A novel network with multiple attention mechanisms for aspect-level sentiment analysis

Authors:

Highlights:

Abstract

Aspect-level sentiment analysis aims to identify the sentiment polarity of specific aspect words in a given sentence. Existing studies mostly use recurrent neural network (RNN)-based models. However, truncated backpropagation and vanishing and exploding gradient problems often occur during the training process. To address these issues, this paper proposes a novel network with multiple attention mechanisms for aspect-level sentiment analysis. First, we apply the bidirectional encoder representations from transformers (BERT) model to construct word embedding vectors. Second, multiple attention mechanisms, including intra- and inter-level attention mechanisms, are used to generate hidden state representations of a sentence. In the intra-level attention mechanism, multi-head self-attention and point-wise feed-forward structures are designed. In the inter-level attention mechanism, global attention is used to capture the interactive information between context and aspect words. Furthermore, a feature focus attention mechanism is proposed to enhance sentiment identification. Finally, several classic aspect-level sentiment analysis datasets are used to evaluate the performance of our model. Experiments demonstrate that the proposed model achieves state-of-the-art results compared with baseline models.
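As a rough illustration of the architecture the abstract describes, the sketch below combines an intra-level block (multi-head self-attention followed by a point-wise feed-forward layer) with an inter-level global attention step that scores context positions against a pooled aspect representation. This is a minimal sketch, not the authors' implementation: the module names, dimensions, dot-product scoring function, and mean-pooling of the aspect are all assumptions, since the abstract does not specify them.

```python
# Minimal sketch of the intra- and inter-level attention described in the
# abstract. Module names, dimensions, and the dot-product scoring are
# assumptions; the paper's actual design may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

class IntraLevelBlock(nn.Module):
    """Multi-head self-attention followed by a point-wise feed-forward layer."""
    def __init__(self, d_model=768, n_heads=8, d_ff=2048):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model)
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x):                      # x: (batch, seq_len, d_model)
        attn_out, _ = self.self_attn(x, x, x)  # self-attention over the sequence
        x = self.norm1(x + attn_out)           # residual connection + layer norm
        return self.norm2(x + self.ffn(x))     # point-wise feed-forward

def global_attention(context, aspect):
    """Inter-level attention: weight context positions by their relevance
    to the (mean-pooled) aspect representation and return a summary vector."""
    query = aspect.mean(dim=1, keepdim=True)            # (batch, 1, d_model)
    scores = torch.bmm(query, context.transpose(1, 2))  # (batch, 1, seq_len)
    weights = F.softmax(scores, dim=-1)                 # attention weights
    return torch.bmm(weights, context).squeeze(1)       # (batch, d_model)

# Toy usage with BERT-sized hidden states (in the paper, the embeddings
# would come from a pre-trained BERT encoder).
ctx = torch.randn(2, 20, 768)   # context word vectors
asp = torch.randn(2, 3, 768)    # aspect word vectors
block = IntraLevelBlock()
summary = global_attention(block(ctx), block(asp))
print(summary.shape)            # torch.Size([2, 768])
```

The resulting summary vector would then feed the downstream classifier; the paper's feature focus attention mechanism, whose details the abstract does not give, is omitted here.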

Keywords: Aspect-level sentiment analysis, Attention mechanism, Pre-trained BERT, Natural language processing

Article history: Received 8 August 2020, Revised 1 June 2021, Accepted 2 June 2021, Available online 10 June 2021, Version of Record 10 June 2021.

DOI: https://doi.org/10.1016/j.knosys.2021.107196