Unsupervised domain adaptation: A multi-task learning-based method

Authors:

Highlights:

Abstract

This paper presents a new perspective that formulates unsupervised domain adaptation as a multi-task learning problem. This formulation removes the assumption, commonly made in classifier-based adaptation approaches, that a shared classifier exists for the same task in different domains. Specifically, the source task is to learn a linear classifier from the labelled source data, and the target task is to learn a linear transform that clusters the unlabelled target data, mapping the original target data to a lower-dimensional subspace where the geometric structure is preserved. The two tasks are learned jointly by enforcing that the target transformation stays close to the source classifier while the class distribution shift between domains is reduced. Based on this formulation, two novel classifier-based adaptation algorithms are proposed, using Regularized Least Squares and Support Vector Machines respectively, in which unshared classifiers for the source and target domains are assumed and jointly learned to deal effectively with large domain shift. Experiments on both synthetic and real-world cross-domain recognition tasks show that the proposed methods outperform several state-of-the-art unsupervised domain adaptation methods.
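For illustration, the joint formulation described in the abstract can be read as an objective of the following general shape. This is a minimal sketch, not the paper's exact equation: the loss $\ell$, the target-structure regularizer $\Omega$, the trade-off weights $\lambda$, $\gamma$, $\beta$, and the distribution-discrepancy term $d(\cdot,\cdot)$ are assumptions used only to make the coupling between the two tasks concrete.

$$
\min_{\mathbf{w}_s,\ \mathbf{W}_t}\;
\underbrace{\sum_{i=1}^{n_s} \ell\!\left(y_i,\ \mathbf{w}_s^{\top}\mathbf{x}_i^{s}\right)}_{\text{source task: linear classifier}}
\;+\; \lambda\,
\underbrace{\Omega\!\left(\mathbf{W}_t;\ X_t\right)}_{\text{target task: structure-preserving transform}}
\;+\; \gamma\,\bigl\lVert \mathbf{W}_t - \mathbf{w}_s \bigr\rVert^{2}
\;+\; \beta\, d\!\left(P_s, P_t\right)
$$

Here the third term couples the target transform to the source classifier, and the last term penalizes the class distribution shift between domains; instantiating $\ell$ as a squared loss or a hinge loss would correspond, respectively, to the Regularized Least Squares and Support Vector Machine variants mentioned in the abstract.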

Keywords: Unsupervised domain adaptation, Transfer learning, Object recognition, Digit recognition

Article history: Received 6 December 2018, Revised 29 July 2019, Accepted 19 August 2019, Available online 22 August 2019, Version of Record 5 November 2019.

Paper URL: https://doi.org/10.1016/j.knosys.2019.104975