ExprADA: Adversarial domain adaptation for facial expression analysis

Authors:

Highlights:

• The rationale behind our contributions is to use visual domain adaptation on simulated face images to reduce the reality gap between simulation and reality. To this end, we propose an adversarial domain adaptation approach, cast as an image-to-image translation task, for facial expression analysis (a minimal sketch follows this list).

• We focus on the data augmentation process and generate face images with a desired expression category, alleviating the common problem of class imbalance and introducing more variation into the training data, which yields a more robust classification system.

• We investigate the use of domain adaptation to transform the visual appearance of images from the target domain (simulated faces) into the source domain (real face images) without affecting face details such as identity or expression. As a result, the expression recognition model learned from the labeled source domain, containing real face images with arbitrary head poses, generalizes to the translated images from the unlabeled target domain, containing frontal simulated face images, without re-training a model for the target domain.

• Compared to other variants of adversarial domain adaptation, we demonstrate that the proposed method achieves better emotion recognition performance on in-the-wild data.
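
The sketch below illustrates the general shape of the adversarial image-to-image translation setup the highlights describe: a generator translates a simulated face (conditioned on a target expression code) toward the real-image domain, while a discriminator distinguishes real faces from translated ones, and a reconstruction term discourages changing identity/expression content. This is a minimal PyTorch illustration, not the authors' ExprADA architecture; the network layouts, the loss weights, the seven-class expression assumption, and the `train_step` interface are all illustrative assumptions.

```python
# Minimal sketch of expression-conditioned adversarial domain adaptation.
# NOT the ExprADA implementation from the paper; shapes, losses, and
# hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_EXPR = 7  # e.g. seven basic expression categories (assumption)

class Generator(nn.Module):
    """Maps a (simulated) face + target expression code to a real-styled face."""
    def __init__(self, img_ch=3, expr_dim=NUM_EXPR):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(img_ch + expr_dim, 64, 4, 2, 1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, img_ch, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, x, expr):
        # Broadcast the one-hot expression code over spatial dims, then concat.
        e = expr.view(expr.size(0), -1, 1, 1).expand(-1, -1, x.size(2), x.size(3))
        return self.net(torch.cat([x, e], dim=1))

class Discriminator(nn.Module):
    """Scores whether a face looks like the real (source) domain."""
    def __init__(self, img_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(img_ch, 64, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 1, 4, 1, 0),
        )

    def forward(self, x):
        return self.net(x).mean(dim=(1, 2, 3))  # one realness score per image

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))

def train_step(sim_img, expr_onehot, real_img, lam_rec=10.0):
    """One update: sim_img (B,3,64,64), expr_onehot (B,NUM_EXPR), real_img (B,3,64,64)."""
    # Discriminator: real source faces vs. translated simulated faces.
    fake = G(sim_img, expr_onehot).detach()
    d_loss = (
        F.binary_cross_entropy_with_logits(D(real_img), torch.ones(real_img.size(0)))
        + F.binary_cross_entropy_with_logits(D(fake), torch.zeros(fake.size(0)))
    )
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: fool D, plus an L1 reconstruction term as a stand-in for
    # the identity/expression-preserving losses the highlights allude to.
    fake = G(sim_img, expr_onehot)
    g_adv = F.binary_cross_entropy_with_logits(D(fake), torch.ones(fake.size(0)))
    g_loss = g_adv + lam_rec * F.l1_loss(fake, sim_img)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```

Because the generator is conditioned on a target expression code, the same machinery doubles as a data-augmentation tool: sampling under-represented expression labels produces extra training faces for the minority classes, which is the class-imbalance argument of the second highlight. Translated target images can then be fed to a classifier trained only on the labeled source domain, as the third highlight describes.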

Keywords: Visual domain adaptation, Facial expression recognition, Adversarial learning

Article history: Received 3 December 2018, Revised 3 October 2019, Accepted 13 November 2019, Available online 14 November 2019, Version of Record 28 November 2019.

Paper URL: https://doi.org/10.1016/j.patcog.2019.107111