Joint bi-adversarial learning for unsupervised domain adaptation

Authors:

Highlights:

Abstract

A key challenge in unsupervised domain adaptation (UDA) is how to fully exploit the structure and distributional information of the data, so that source-domain knowledge can be transferred for more accurate classification of the unlabeled target domain. Although much research has been devoted to UDA, existing works mostly consider only distribution alignment or learning domain-invariant features through adversarial techniques, ignoring feature processing and intra-domain category information. To this end, we design a new cross-domain discrepancy metric, namely joint distribution for maximum mean discrepancy (JD-MMD), and propose a deep unsupervised domain adaptation method, namely joint bi-adversarial learning for unsupervised domain adaptation (JBL-UDA). Specifically, JD-MMD measures cross-domain divergence in terms of both discrepancy and relevance by preserving the cross-domain joint distribution discrepancy as well as class discriminability. Building on this divergence measure, JBL-UDA learns in two modalities: one aligns domains and classes implicitly through bi-adversarial learning, while the other aligns them explicitly via the JD-MMD metric. In addition, JBL-UDA exploits structural prior knowledge from data classes and domains to generate class-discriminative, domain-invariant representations. Finally, extensive evaluations show that the proposed method achieves state-of-the-art accuracy.
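The JD-MMD metric builds on the standard maximum mean discrepancy (MMD), which compares two samples by the distance between their kernel mean embeddings. As a point of reference only, here is a minimal sketch of the plain (marginal) MMD estimate with a Gaussian kernel; it is not the authors' JD-MMD, which additionally incorporates joint distribution discrepancy and class information, and the `sigma` bandwidth is an illustrative choice:

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    # Pairwise Gaussian kernel values between rows of a and rows of b.
    d2 = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2.0 * a @ b.T
    return np.exp(-d2 / (2.0 * sigma**2))

def mmd2(source, target, sigma=1.0):
    # Biased estimate of the squared MMD between two samples:
    # mean k(s, s') + mean k(t, t') - 2 * mean k(s, t).
    k_ss = gaussian_kernel(source, source, sigma).mean()
    k_tt = gaussian_kernel(target, target, sigma).mean()
    k_st = gaussian_kernel(source, target, sigma).mean()
    return k_ss + k_tt - 2.0 * k_st
```

Samples drawn from the same distribution yield an MMD estimate near zero, while a distribution shift between source and target inflates it, which is what makes it usable as an alignment loss in deep UDA methods.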

Keywords: Unsupervised domain adaptation, Adversarial learning, Prior knowledge, Joint distribution discrepancy, Joint bi-adversarial learning

Review history: Received 21 December 2021, Revised 21 April 2022, Accepted 22 April 2022, Available online 30 April 2022, Version of Record 12 May 2022.

DOI: https://doi.org/10.1016/j.knosys.2022.108903