Human Attention in Visual Question Answering: Do Humans and Deep Networks Look at the Same Regions?

Authors:

Highlights:

Abstract

We conduct large-scale studies on ‘human attention’ in Visual Question Answering (VQA) to understand where humans choose to look to answer questions about images. We design and test multiple game-inspired novel attention-annotation interfaces that require the subject to sharpen regions of a blurred image to answer a question. Thus, we introduce the VQA-HAT (Human ATtention) dataset. We evaluate attention maps generated by state-of-the-art VQA models against human attention both qualitatively (via visualizations) and quantitatively (via rank-order correlation). Our experiments show that current attention models in VQA do not seem to be looking at the same regions as humans. Finally, we train VQA models with explicit attention supervision, and find that it improves VQA performance.
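The abstract compares model-generated attention maps against human attention maps via rank-order correlation. Below is a minimal sketch of how such a comparison could be computed; the function name, map sizes, and use of Spearman's rank correlation over flattened maps are illustrative assumptions, not the authors' released evaluation code.

```python
# Illustrative sketch (not the paper's code): rank-order correlation between a
# model attention map and a human attention map of the same spatial resolution.
import numpy as np
from scipy.stats import spearmanr

def attention_rank_correlation(model_attention: np.ndarray,
                               human_attention: np.ndarray) -> float:
    """Spearman rank correlation between two equally shaped attention maps."""
    assert model_attention.shape == human_attention.shape
    rho, _ = spearmanr(model_attention.ravel(), human_attention.ravel())
    return rho

# Hypothetical example: 14x14 attention grids filled with random values.
rng = np.random.default_rng(0)
model_map = rng.random((14, 14))
human_map = rng.random((14, 14))
print(attention_rank_correlation(model_map, human_map))
```

In practice, one map would typically be resized to match the other's resolution before correlating, and the correlation would be averaged over all question-image pairs.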

Keywords:

Article history: Received 17 September 2016, Revised 7 September 2017, Accepted 5 October 2017, Available online 14 October 2017, Version of Record 23 November 2017.

Paper link: https://doi.org/10.1016/j.cviu.2017.10.001