Video sentiment analysis with bimodal information-augmented multi-head attention

Authors:

Highlights:

Abstract:

Humans express feelings or emotions via different channels. Take language as an example: the same utterance can entail different sentiments under different visual–acoustic contexts. To precisely understand human intentions and reduce the misunderstandings caused by ambiguity and sarcasm, we should consider multimodal signals, including textual, visual and acoustic ones. The crucial challenge is how to fuse features from different modalities for sentiment analysis. To effectively fuse the information carried by different modalities and better predict sentiments, we design a novel multi-head attention based fusion network, inspired by the observation that the pairwise interactions between modalities differ and do not contribute equally to the final sentiment prediction. By assigning appropriate attention to the acoustic–visual, acoustic–textual and visual–textual features and exploiting a residual structure, we focus on the most significant features. We conduct extensive experiments on four public multimodal datasets, one in Chinese and three in English. The results show that our approach outperforms existing methods and can explain the contributions of the bimodal interactions among the modalities.
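The abstract describes pairwise (bimodal) fusion via multi-head attention with a residual structure. The sketch below is an illustrative reading of that idea, not the authors' implementation: module names, dimensions, the mean-pooling step and the three-way concatenation are all assumptions made for the example.

```python
# Illustrative sketch (assumed architecture): fuse two modalities with
# multi-head attention plus a residual connection, then combine the three
# pairwise interactions for sentiment prediction.
import torch
import torch.nn as nn

class BimodalFusion(nn.Module):
    """Attend one modality (query) over another (key/value) and add a residual."""
    def __init__(self, d_model: int = 128, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, query_mod: torch.Tensor, context_mod: torch.Tensor) -> torch.Tensor:
        # Both inputs: (batch, seq_len, d_model), already projected to a
        # shared dimension by modality-specific encoders.
        fused, _ = self.attn(query_mod, context_mod, context_mod)
        return self.norm(query_mod + fused)  # residual connection

class TrimodalSentiment(nn.Module):
    """Combine acoustic–visual, acoustic–textual and visual–textual interactions."""
    def __init__(self, d_model: int = 128, n_heads: int = 4, n_classes: int = 3):
        super().__init__()
        self.av = BimodalFusion(d_model, n_heads)  # acoustic–visual
        self.at = BimodalFusion(d_model, n_heads)  # acoustic–textual
        self.vt = BimodalFusion(d_model, n_heads)  # visual–textual
        self.classifier = nn.Linear(3 * d_model, n_classes)

    def forward(self, text, audio, video):
        # Each input: (batch, seq_len, d_model).
        av = self.av(audio, video).mean(dim=1)  # pool over time
        at = self.at(audio, text).mean(dim=1)
        vt = self.vt(video, text).mean(dim=1)
        return self.classifier(torch.cat([av, at, vt], dim=-1))

# Usage with random tensors standing in for pre-extracted modality features.
model = TrimodalSentiment()
text = torch.randn(2, 20, 128)
audio = torch.randn(2, 20, 128)
video = torch.randn(2, 20, 128)
logits = model(text, audio, video)  # shape: (2, 3)
```

In this reading, each bimodal branch lets one modality query another, and the residual connection preserves the querying modality's original features; how the paper actually weights or gates the three branches is not specified in the abstract.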

Keywords: Information fusion, Multi-head attention, Multimodality, Sentiment analysis

Article history: Received 30 May 2021, Revised 29 October 2021, Accepted 31 October 2021, Available online 2 November 2021, Version of Record 11 November 2021.

DOI: https://doi.org/10.1016/j.knosys.2021.107676