Multi-modal emotion analysis from facial expressions and electroencephalogram

Authors:

Highlights:

Abstract:

Automatic analysis of human spontaneous behavior has attracted increasing attention in recent years from researchers in computer vision. This paper proposes an approach for multi-modal, video-induced emotion recognition based on facial expression and electroencephalogram (EEG) signals. Spontaneous facial expression is used as an external channel, and a new feature, formed by the percentages of nine facial expression categories, is proposed for classifying valence and arousal. EEG is used as an internal channel that supplements facial expressions for more reliable emotion recognition; discriminative spectral power and spectral power difference features are exploited for EEG analysis. Finally, the two channels are fused at the feature level and the decision level for multi-modal emotion recognition. Experiments are conducted on the MAHNOB-HCI database, comprising 522 spontaneous facial expression videos and EEG recordings from 27 participants. In addition, human perception of emotion is measured with 10 volunteers and compared to the proposed approach. The experimental results and the comparison with average human performance demonstrate the effectiveness of the proposed multi-modal approach.
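As a concrete illustration of the pipeline the abstract outlines, the sketch below shows how the two channels could be combined at the feature level (concatenating an expression-percentage vector with EEG band-power features) and at the decision level (averaging per-class probabilities from two classifiers). This is a minimal sketch under assumptions, not the authors' implementation: the nine expression categories, the frequency bands, the SVM classifiers, and the equal fusion weight are all chosen here for illustration only.

```python
"""Illustrative sketch (not the paper's code) of feature-level and
decision-level fusion of an expression-percentage feature with EEG
band-power features. Band choices, classifier, and weights are assumptions."""
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC

N_EXPRESSIONS = 9  # assumed nine facial expression categories

def expression_percentage_feature(frame_labels):
    """Fraction of video frames assigned to each expression category."""
    counts = np.bincount(frame_labels, minlength=N_EXPRESSIONS)
    return counts / max(len(frame_labels), 1)

def eeg_band_power_feature(eeg, fs=256,
                           bands=((4, 8), (8, 13), (13, 30), (30, 45))):
    """Mean spectral power per channel in each (assumed) frequency band."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2, axis=-1)
    feats = []
    for lo, hi in bands:
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].mean(axis=-1))
    return np.concatenate(feats)

def feature_level_fusion(face_feat, eeg_feat):
    """Concatenate the two modalities into a single feature vector."""
    return np.concatenate([face_feat, eeg_feat])

def decision_level_fusion(p_face, p_eeg, w_face=0.5):
    """Weighted average of per-class probabilities from the two classifiers."""
    return w_face * p_face + (1.0 - w_face) * p_eeg

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy data: 40 trials, 32 EEG channels, 10 s at 256 Hz, binary valence label.
    fs, n_trials = 256, 40
    y = rng.integers(0, 2, n_trials)
    face_X = np.stack([expression_percentage_feature(
        rng.integers(0, N_EXPRESSIONS, 300)) for _ in range(n_trials)])
    eeg_X = np.stack([eeg_band_power_feature(
        rng.standard_normal((32, fs * 10)), fs) for _ in range(n_trials)])

    # Feature-level fusion: one classifier on the concatenated vector.
    fused_X = np.stack([feature_level_fusion(f, e) for f, e in zip(face_X, eeg_X)])
    clf_fused = SVC(probability=True).fit(fused_X[:30], y[:30])
    print("feature-level accuracy:", clf_fused.score(fused_X[30:], y[30:]))

    # Decision-level fusion: one classifier per modality, probabilities averaged.
    clf_face = SVC(probability=True).fit(face_X[:30], y[:30])
    clf_eeg = SVC(probability=True).fit(eeg_X[:30], y[:30])
    p = decision_level_fusion(clf_face.predict_proba(face_X[30:]),
                              clf_eeg.predict_proba(eeg_X[30:]))
    print("decision-level accuracy:", (p.argmax(1) == y[30:]).mean())
```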

Keywords:

Review history: Received 26 March 2015, Revised 23 September 2015, Accepted 29 September 2015, Available online 17 May 2016, Version of Record 17 May 2016.

Paper URL: https://doi.org/10.1016/j.cviu.2015.09.015