Modeling label-wise syntax for fine-grained sentiment analysis of reviews via memory-based neural model


Abstract:

Fine-grained sentiment analysis has shown great benefit in real-world applications such as social media texts and product reviews. While current state-of-the-art methods employ external syntactic dependency knowledge to enhance task performance, most of them exploit only the dependency edges, leaving the dependency labels unused; the work presented here shows that these labels are also highly helpful to the task. In this study we leverage both kinds of syntactic features to improve fine-grained sentiment analysis. Compared with previous studies, our method advances in the following aspects. First, we are the first to propose a novel label-wise syntax memory (LSM) network that encodes both dependency edge and dependency label information simultaneously in a unified manner. Second, we take advantage of the contextualized BERT language model to provide rich context for the targeted aspects. We conduct experiments on five benchmark datasets, and the results demonstrate that our model outperforms the current best-performing baselines and achieves new state-of-the-art performance. Further analysis confirms the necessity of encoding sufficient syntactic dependency knowledge for the task and illustrates the effectiveness of our LSM encoder in modeling these syntax attributes. By exploiting rich syntactic information, our framework also outperforms baselines on reviews mentioning multiple aspects and on long-range dependency issues.
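The core idea of a label-wise syntax memory — storing each dependency edge together with an embedding of its label as a memory slot that an aspect representation can attend over — can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's actual architecture: the function names (`build_label_memory`, `attend`), the toy label set, and all dimensions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dependency-label vocabulary with random label embeddings (assumed).
DEP_LABELS = ["nsubj", "dobj", "amod", "det"]
LABEL_DIM = 8
label_emb = {lab: rng.normal(size=LABEL_DIM) for lab in DEP_LABELS}

def build_label_memory(token_vecs, edges):
    """Build one memory slot per dependency arc: head vec + dependent vec + label embedding.
    edges: list of (head_idx, dep_idx, label) triples from a dependency parse."""
    slots = [
        np.concatenate([token_vecs[h], token_vecs[d], label_emb[lab]])
        for h, d, lab in edges
    ]
    return np.stack(slots)  # shape: (num_edges, 2 * token_dim + LABEL_DIM)

def attend(aspect_vec, memory, W):
    """Aspect-conditioned softmax attention over the syntax memory slots."""
    scores = memory @ W @ aspect_vec          # one score per arc
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ memory                   # weighted summary of syntax memory

# Usage on a toy 4-token sentence with random "contextual" token vectors
# (in the paper's setting these would come from BERT).
dim = 8
tokens = rng.normal(size=(4, dim))
edges = [(1, 0, "nsubj"), (1, 2, "dobj"), (2, 3, "amod")]
memory = build_label_memory(tokens, edges)
W = rng.normal(size=(memory.shape[1], dim))
summary = attend(tokens[2], memory, W)        # attend from the aspect token
print(memory.shape, summary.shape)            # (3, 24) (24,)
```

The summary vector aggregates edge and label information relevant to the chosen aspect and could then be fused with the aspect's contextual representation before classification.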

Keywords: Text mining, Natural language processing, Sentiment analysis, Syntax knowledge, Deep learning, Memory mechanism

Article history: Received 23 December 2020, Revised 11 May 2021, Accepted 16 May 2021, Available online 25 May 2021, Version of Record 25 May 2021.

DOI: https://doi.org/10.1016/j.ipm.2021.102641