Joint metric and feature representation learning for unsupervised domain adaptation


Abstract:

Domain adaptation algorithms leverage knowledge from a well-labeled source domain to facilitate learning in an unlabeled target domain, where the two domains are related but drawn from different data distributions. Existing domain adaptation approaches either try to explicitly mitigate the distribution gap by minimizing a distance metric, or attempt to learn a new feature representation by revealing the factors shared between the domains and using that representation as a bridge for knowledge transfer. Recently, several researchers have argued that jointly optimizing the distribution gap and the latent factors yields a better transfer model. In this paper, we therefore propose a novel approach that simultaneously mitigates the distribution gap and learns a feature representation through a common objective. Specifically, we present joint metric and feature representation learning (JMFL) for unsupervised domain adaptation. On the one hand, JMFL minimizes the domain discrepancy between the source domain and the target domain; on the other hand, it reveals the shared underlying factors between the two domains to learn a new feature representation. We smoothly incorporate the two aspects into a unified objective and present a detailed optimization method. Extensive experiments on several open benchmarks verify that our approach achieves state-of-the-art results with significant improvements.
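The unified objective described in the abstract combines two terms: a discrepancy term that aligns the source and target distributions, and a representation term that preserves shared underlying factors. The following is a minimal illustrative sketch of such a joint objective, not the paper's actual formulation: it assumes a linear projection `W`, a linear maximum mean discrepancy (mean-embedding distance) as the metric term, and a reconstruction error as the representation term. All function names and the weighting parameter `alpha` are hypothetical.

```python
import numpy as np

def linear_mmd(Zs, Zt):
    # Squared distance between the mean embeddings of the two domains:
    # a simple stand-in for the domain-discrepancy metric being minimized.
    return np.sum((Zs.mean(axis=0) - Zt.mean(axis=0)) ** 2)

def joint_objective(W, Xs, Xt, alpha=1.0):
    # Project both domains into a shared subspace, then combine
    # (1) the discrepancy between the projected domains and
    # (2) a reconstruction term encouraging W to retain shared factors.
    Zs, Zt = Xs @ W, Xt @ W
    discrepancy = linear_mmd(Zs, Zt)
    X = np.vstack([Xs, Xt])
    reconstruction = np.mean((X - X @ W @ W.T) ** 2)
    return discrepancy + alpha * reconstruction

rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, size=(100, 5))      # labeled source features
Xt = rng.normal(0.5, 1.0, size=(100, 5))      # distribution-shifted target features
W = np.linalg.qr(rng.normal(size=(5, 3)))[0]  # random orthonormal projection
loss = joint_objective(W, Xs, Xt)
```

In a full method, `W` would be optimized (e.g., by gradient descent or an eigen-decomposition) to drive both terms down simultaneously, which is the "common objective" idea the abstract contrasts with optimizing either term alone.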

Keywords: Domain adaptation, Metric learning, Feature representation learning, JMFL

Article history: Received 29 March 2019, Revised 9 November 2019, Accepted 11 November 2019, Available online 19 November 2019, Version of Record 24 February 2020.

DOI: https://doi.org/10.1016/j.knosys.2019.105222