The synergy of double attention: Combine sentence-level and word-level attention for image captioning

Authors:

Highlights:

Abstract

Existing attention models for image captioning typically extract only word-level attention information: the attention mechanism extracts local attention information from the image to generate the current word, but lacks accurate guidance from global image information. In this paper, we first propose an image captioning approach based on self-attention. Sentence-level attention information is extracted from the image through a self-attention mechanism to represent the global image information needed to generate the sentence. Furthermore, we propose a double attention model which combines the sentence-level attention model with the word-level attention model to generate more accurate captions. We apply supervision and optimization at the intermediate stage of the model to mitigate information interference. In addition, we perform two-stage training with reinforcement learning to directly optimize the evaluation metric. Finally, we evaluate our model on three standard datasets, i.e., Flickr8k, Flickr30k, and MSCOCO. Experimental results show that our double attention model generates more accurate and richer captions, and outperforms many state-of-the-art image captioning approaches on various evaluation metrics.
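To make the combination of the two attention levels concrete, below is a minimal PyTorch sketch of how a sentence-level (global, self-attention) branch and a word-level (local, state-conditioned) branch could be fused into a single context vector at each decoding step. The class name `DoubleAttention`, the feature dimensions, the mean pooling, and the additive fusion are illustrative assumptions for this sketch, not the authors' actual implementation.

```python
import torch
import torch.nn as nn

class DoubleAttention(nn.Module):
    """Sketch: fuse sentence-level (global) and word-level (local)
    attention over image region features. Names and dimensions are
    hypothetical, not taken from the paper."""

    def __init__(self, feat_dim=2048, hidden_dim=512):
        super().__init__()
        # Sentence-level branch: self-attention over region features,
        # pooled into one global vector that guides sentence generation.
        self.self_attn = nn.MultiheadAttention(feat_dim, num_heads=8,
                                               batch_first=True)
        self.global_proj = nn.Linear(feat_dim, hidden_dim)
        # Word-level branch: additive attention conditioned on the
        # decoder hidden state at the current time step.
        self.feat_proj = nn.Linear(feat_dim, hidden_dim)
        self.state_proj = nn.Linear(hidden_dim, hidden_dim)
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, regions, h_t):
        # regions: (B, N, feat_dim) region features; h_t: (B, hidden_dim)
        # Sentence-level attention: computed from the image alone.
        attended, _ = self.self_attn(regions, regions, regions)
        global_vec = self.global_proj(attended.mean(dim=1))   # (B, hidden)
        # Word-level attention: recomputed at every decoding step.
        e = self.score(torch.tanh(self.feat_proj(regions)
                                  + self.state_proj(h_t).unsqueeze(1)))
        alpha = torch.softmax(e, dim=1)                       # (B, N, 1)
        local_vec = (alpha * self.feat_proj(regions)).sum(dim=1)
        # Combine global sentence guidance with local word context.
        return global_vec + local_vec

# Usage: one decoding step over 36 region features for a batch of 2.
model = DoubleAttention()
ctx = model(torch.randn(2, 36, 2048), torch.randn(2, 512))
print(ctx.shape)  # torch.Size([2, 512])
```

In this sketch the global vector depends only on the image, so it can be computed once per caption, while the local vector is recomputed at every word; a simple sum fuses them, though a gated or learned combination would also fit the description in the abstract.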

Keywords:

Article history: Received 27 March 2019, Revised 11 August 2020, Accepted 17 August 2020, Available online 22 August 2020, Version of Record 26 August 2020.

DOI: https://doi.org/10.1016/j.cviu.2020.103068