Speech emotion recognition using amplitude modulation parameters and a combined feature selection procedure

Authors:

Highlights:

Abstract

Speech emotion recognition (SER) is a challenging task in demanding human-machine interaction systems. Standard approaches based on the categorical model of emotions reach low performance, probably because they model emotions as distinct and independent affective states. Building instead on the dimensional circumplex model of emotions, SER is here formulated as the prediction of valence and arousal on continuous scales in a two-dimensional domain. In this study, we propose a PLS regression model, optimized according to specific feature selection procedures and trained on the Italian speech corpus EMOVO, and we suggest a way to automatically label the corpus in terms of arousal and valence. New speech features related to the amplitude modulation of speech, caused by the slowly varying articulatory motion, as well as standard features extracted from the pitch contour, have been included in the regression model. Over the seven primary emotions (including the neutral state), an average coefficient of determination R2 of 0.72 (maximum 0.95 for fear, minimum 0.60 for sadness) is obtained for the female model, and an average R2 of 0.81 (maximum 0.89 for anger, minimum 0.71 for joy) for the male model.

Keywords: Speech emotion recognition (SER), Circumplex model of emotions, Partial least squares (PLS) regression, Pearson correlation coefficient, Pitch contour characterization, Audio signal modulation

Article history: Received 19 December 2013, Revised 12 March 2014, Accepted 22 March 2014, Available online 2 April 2014.

DOI: https://doi.org/10.1016/j.knosys.2014.03.019