Dirichlet Variational Autoencoder

Authors:

Highlights:

• This paper is a study of the Dirichlet prior in variational autoencoders (a minimal sketch is given after this list).

• Our model outperforms baseline variational autoencoders in terms of log-likelihood.

• Our model produces more meaningful and interpretable latent representations, with no component collapsing, compared to baseline variational autoencoders.

• Our model achieves the best classification accuracy in (semi-)supervised classification tasks compared to baseline variational autoencoders.

• Our model shows better performance in topic model augmentation.
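
To make the first highlight concrete, below is a minimal, hypothetical PyTorch sketch of a VAE with a Dirichlet latent: the encoder outputs concentration parameters, the latent is a reparameterized Dirichlet sample on the probability simplex, and the KL term is taken against a symmetric Dirichlet prior. This is not the authors' exact construction (the paper develops its own approximation of the Dirichlet); all names, layer sizes, and the `prior_alpha` value are illustrative assumptions.

```python
# Hypothetical sketch of a Dirichlet-latent VAE; not the paper's exact method.
import torch
import torch.nn as nn
from torch.distributions import Dirichlet, kl_divergence

class DirichletVAE(nn.Module):
    def __init__(self, x_dim=784, h_dim=256, z_dim=10, prior_alpha=1.0):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU(),
                                     nn.Linear(h_dim, z_dim))
        self.decoder = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                     nn.Linear(h_dim, x_dim))
        # Symmetric Dirichlet prior over the simplex-valued latent code.
        self.register_buffer("prior_alpha", torch.full((z_dim,), prior_alpha))

    def forward(self, x):
        # Softplus keeps the posterior concentration parameters strictly positive.
        alpha = nn.functional.softplus(self.encoder(x)) + 1e-4
        q_z = Dirichlet(alpha)
        # Reparameterized sample; PyTorch provides pathwise gradients for Dirichlet.
        z = q_z.rsample()
        logits = self.decoder(z)
        recon = nn.functional.binary_cross_entropy_with_logits(
            logits, x, reduction="none").sum(-1)
        kl = kl_divergence(q_z, Dirichlet(self.prior_alpha.expand_as(alpha)))
        return (recon + kl).mean()  # negative ELBO, averaged over the batch

# Usage: loss = DirichletVAE()(x_batch) for x_batch of shape (B, 784) with values in [0, 1].
```

Because the latent lives on the simplex, each dimension can be read as a mixture-like proportion, which is one reason a Dirichlet latent is attractive for interpretability and for topic-model-style applications mentioned in the highlights.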


Keywords: Representation learning, Variational autoencoder, Deep generative model, Multi-modal latent representation, Component collapse

Article history: Received 23 April 2019, Revised 4 October 2019, Accepted 22 June 2020, Available online 24 June 2020, Version of Record 3 July 2020.

DOI: https://doi.org/10.1016/j.patcog.2020.107514