Meta conditional variational auto-encoder for domain generalization

Authors:

Highlights:

Abstract:

Domain generalization has recently attracted increasing attention in machine learning because it tackles the challenging out-of-distribution problem. The large domain shift from source domains to target domains induces great uncertainty when making predictions on target domains, for which no data is accessible during learning. In this paper, we propose the meta conditional variational auto-encoder (Meta-CVAE), a new meta probabilistic latent variable framework for domain generalization. Meta-CVAE better models the uncertainty across domains by inheriting the strong probabilistic modeling ability of the VAE. By leveraging the meta-learning framework to mimic the generalization from source to target domains during training, Meta-CVAE learns to acquire the capability to generalize by episodically transferring knowledge across domains. Meta-CVAE is optimized with a variational objective based on a newly derived evidence lower bound under the meta-learning setting. To further enhance prediction performance, we develop the Wasserstein Meta-CVAE by imposing a Wasserstein-distance-based discriminative constraint on the latent representations, which essentially separates different classes in the semantic space. Extensive experiments on diverse benchmarks demonstrate that our methods consistently outperform previous approaches, and comprehensive ablation studies further validate their effectiveness for domain generalization.
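As background for the two ingredients the abstract names, here is a minimal NumPy sketch of a standard Gaussian evidence lower bound and the closed-form 2-Wasserstein distance between 1-D Gaussians. This is not the paper's derived meta-learning ELBO or its exact discriminative constraint; the function names and the diagonal-Gaussian assumptions are illustrative only.

```python
import numpy as np

def gaussian_kl(mu, logvar):
    # KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dims.
    # Standard regularizer in the VAE objective.
    return -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar))

def elbo(recon_log_lik, mu, logvar):
    # Evidence lower bound: reconstruction log-likelihood minus the KL term.
    return recon_log_lik - gaussian_kl(mu, logvar)

def w2_gaussian_1d(mu1, sigma1, mu2, sigma2):
    # Closed-form squared 2-Wasserstein distance between two 1-D Gaussians;
    # such a term could pull apart per-class latent distributions.
    return (mu1 - mu2) ** 2 + (sigma1 - sigma2) ** 2
```

With a posterior equal to the prior (mu = 0, logvar = 0) the KL term vanishes and the ELBO reduces to the reconstruction term, while a larger Wasserstein distance between class-conditional latents corresponds to better-separated classes in the semantic space.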

Keywords:

Review history: Received 17 December 2021, Revised 18 June 2022, Accepted 28 June 2022, Available online 5 July 2022, Version of Record 11 July 2022.

DOI: https://doi.org/10.1016/j.cviu.2022.103503