Label Disentangled Analysis for unsupervised visual domain adaptation

Authors:

Highlights:

Abstract

Labeling a large number of samples in machine learning is often a prohibitively time-consuming task. Unsupervised domain adaptation (UDA) aims to learn a model that classifies or represents the target domain by borrowing sufficiently labeled data from semantically adjacent but differently distributed source domains. However, existing DA models overlook an essential problem: the label-dominated supervision signal easily drives the learning model/network to memorize the data rather than learn essential image features, which leaves DA unable to explain what makes transfer succeed. Motivated by this, our core idea is to disentangle (erase) the label information from image features to ease the DA task. To that end, this paper, for the first time, proposes "label-erased" and "label-reconstructed" feature distributions by establishing a bidirectional mapping between them, and models each for domain adaptation with a divide-and-conquer philosophy. Specifically, we propose a Label Disentangled Analysis (LDA) approach consisting of three novel parts: (1) disentangling (decoupling) the labels to obtain label-erased (label-irrelevant) features from domain data, (2) aligning the label-erased image features and label-reconstructed features between domains, and (3) cross-associating the aligned image and label features to ensure the class discrimination of the data. The above process can be summarized as three actions: label disentanglement (decoupling), feature-label joint adaptation (alignment), and feature-label cross association (cross-coupling). Extensive experiments verify that the proposed LDA outperforms state-of-the-art methods on a number of benchmarks and is also comparable to several advanced deep transfer methods.
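The three actions summarized above can be sketched with simple linear operations. This is a purely illustrative toy, not the paper's actual method: the least-squares label regression used for disentanglement, the mean-matching alignment, and all variable names are assumptions made here for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (all sizes and distributions are illustrative assumptions).
n_s, n_t, d, c = 60, 50, 8, 3
Xs = rng.normal(size=(n_s, d))            # source image features
ys = rng.integers(0, c, size=n_s)         # source labels
Y = np.eye(c)[ys]                         # one-hot label matrix
Xt = rng.normal(loc=0.5, size=(n_t, d))   # shifted (unlabeled) target features

# (1) Label disentanglement: a least-squares map from labels to features gives
#     a "label-reconstructed" component; the residual is "label-erased".
B = np.linalg.lstsq(Y, Xs, rcond=None)[0]   # (c, d) per-class prototypes
label_part = Y @ B                          # label-reconstructed features
erased_s = Xs - label_part                  # label-erased features

# (2) Feature-label joint adaptation: align the label-erased features across
#     domains by first-order (mean) matching; the target is simply centered
#     because it carries no labels.
erased_s -= erased_s.mean(axis=0)
erased_t = Xt - Xt.mean(axis=0)

# (3) Feature-label cross association: recouple the aligned label-erased
#     features with the label-reconstructed part so the adapted source
#     features remain class-discriminative.
Xs_adapted = erased_s + label_part
Xt_adapted = erased_t

print(Xs_adapted.shape, Xt_adapted.shape)
```

In this sketch the divide-and-conquer idea shows up as two separately handled distributions (the label-erased residual and the label-reconstructed prototypes) that are aligned independently and then recombined.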

Keywords: Label Disentanglement, Subspace learning, Domain disparity, Domain adaptation

Article history: Received 22 February 2021, Revised 8 June 2021, Accepted 14 July 2021, Available online 20 July 2021, Version of Record 24 July 2021.

Paper link: https://doi.org/10.1016/j.knosys.2021.107309