Domain Invariant and Agnostic Adaptation

Authors:

Highlights:

Abstract

Domain adaptation addresses the prediction problem in which the source and target data are sampled from different but related probability distributions. The key challenge lies in properly matching these distributions and learning a general feature representation for training the prediction model. In this article, we introduce a Domain Invariant and Agnostic Adaptation (DIAA) solution, which matches the source and target joint distributions and simultaneously aligns the joint distribution of features and domain labels with the product of its marginals. In particular, DIAA matches and aligns the distributions via a feature transformation, and measures both kinds of distribution disparity uniformly under the Kullback–Leibler (KL) divergence. To approximate the two corresponding KL divergences from observed samples, we derive a linear-regression-like technique that fits linear models to the respective density-ratio functions under the quadratic loss. With the estimated KL divergences, learning the DIAA feature transformation is formulated as a minimization problem on the Grassmann manifold. Experiments on text and image classification tasks of varied nature demonstrate the effectiveness of our approach.
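The abstract's KL estimator fits a linear model to a density-ratio function under the quadratic loss, then averages the log of the fitted ratio. A minimal sketch of this general idea (in the spirit of least-squares density-ratio fitting, not the paper's actual implementation; all names and parameters below are illustrative assumptions):

```python
import numpy as np

def lsdr_kl(x_p, x_q, n_centers=50, sigma=1.0, lam=1e-3):
    """Estimate KL(P||Q) from samples x_p ~ P and x_q ~ Q by fitting a
    linear-in-parameters model r(x) = alpha^T phi(x) to the density ratio
    p(x)/q(x) under the quadratic loss, then returning E_p[log r(x)].
    Hypothetical sketch; not the DIAA authors' code."""
    centers = x_p[:n_centers]  # Gaussian kernel centers taken from P-samples

    def phi(x):
        # Gaussian kernel design matrix: one basis function per center
        d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    phi_p, phi_q = phi(x_p), phi(x_q)
    # Quadratic-loss fit: minimize 0.5*alpha^T H alpha - h^T alpha + ridge
    H = phi_q.T @ phi_q / len(x_q)   # approximates E_q[phi phi^T]
    h = phi_p.mean(axis=0)           # approximates E_p[phi]
    alpha = np.linalg.solve(H + lam * np.eye(len(centers)), h)
    r_p = np.clip(phi_p @ alpha, 1e-12, None)  # fitted ratio at P-samples
    return float(np.log(r_p).mean())           # KL(P||Q) ~= E_p[log p/q]
```

For two unit-variance Gaussians with means 0 and 1 the true KL divergence is 0.5, and with a few hundred samples this estimator returns a clearly positive value of roughly that scale.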

Keywords: Domain adaptation, KL divergence, Distribution matching, Riemannian manifold

Article history: Received 12 January 2021, Revised 1 May 2021, Accepted 2 June 2021, Available online 11 June 2021, Version of Record 11 June 2021.

DOI: https://doi.org/10.1016/j.knosys.2021.107192