Attention uncovers task-relevant semantics in emotional narrative understanding

Authors:

Highlights:

Abstract

Attention mechanisms have helped deep neural network models achieve exceptional performance on complex natural language processing tasks. Previous attempts to investigate what these models are "paying attention to" suggest that attention representations capture syntactic information, but there is less evidence that they capture semantics. In this paper, we investigate the capability of an attention mechanism to "attend to" semantically meaningful words. Using a dataset of naturalistic emotional narratives, we first build a Window-Based Attention (WBA) model consisting of a hierarchical, two-level long short-term memory (LSTM) network with softmax attention. Our model outperforms state-of-the-art models at predicting emotional valence and even surpasses average human performance. Next, we show through detailed analyses, including word deletion experiments and visualizations, that words receiving higher attention weights in our model also tend to carry greater emotional semantic meaning. Experimental results using six different pre-trained word embeddings suggest that deep neural network models that achieve human-level performance may learn to place greater attention weights on words that humans find semantically meaningful for the task at hand.
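The abstract outlines the WBA architecture: a word-level LSTM encodes each window of a narrative, a softmax attention layer pools the word states into a single window vector, and a second LSTM over the window vectors predicts valence over time. Below is a minimal PyTorch sketch of that two-level design; the class name, dimensions, and the single-linear-layer attention scorer are illustrative assumptions, not the authors' published implementation.

```python
import torch
import torch.nn as nn

class WindowBasedAttention(nn.Module):
    """Hypothetical sketch of a hierarchical, two-level LSTM with
    softmax attention over the words in each window."""

    def __init__(self, embed_dim, hidden_dim):
        super().__init__()
        self.word_lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.attn = nn.Linear(hidden_dim, 1)          # scores each word state
        self.window_lstm = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        self.valence_head = nn.Linear(hidden_dim, 1)  # per-window valence

    def forward(self, x):
        # x: (batch, n_windows, words_per_window, embed_dim),
        # e.g. pre-trained word embeddings for each window of the narrative.
        b, w, t, d = x.shape
        h, _ = self.word_lstm(x.reshape(b * w, t, d))  # word-level states
        alpha = torch.softmax(self.attn(h), dim=1)     # softmax attention weights
        window_vec = (alpha * h).sum(dim=1)            # attention-weighted pooling
        out, _ = self.window_lstm(window_vec.reshape(b, w, -1))  # narrative level
        return self.valence_head(out).squeeze(-1), alpha.reshape(b, w, t)

# Hypothetical shapes: 2 narratives, 10 windows, 20 words, 300-dim embeddings.
model = WindowBasedAttention(embed_dim=300, hidden_dim=128)
valence, attn = model(torch.randn(2, 10, 20, 300))
print(valence.shape, attn.shape)  # torch.Size([2, 10]) torch.Size([2, 10, 20])
```

The returned attention weights are the quantity the paper's word deletion experiments and visualizations would inspect: words receiving higher weights can be deleted from the input to measure how much valence prediction degrades.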

Keywords: Explainable AI, Emotion understanding, Neural network attention

Article history: Received 24 December 2020; Revised 1 April 2021; Accepted 17 May 2021; Available online 19 May 2021; Version of Record 24 May 2021.

DOI: https://doi.org/10.1016/j.knosys.2021.107162