Learning to infer human attention in daily activities

Authors:

Highlights:

• Different encoder-decoder architectures significantly affect performance.

• Performance improves when the task encoding loss is included.

• Pretraining the neural network contributes to better performance (see the sketch after this list).
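
The highlights describe an encoder-decoder network for inferring human attention, trained with an auxiliary task encoding loss. Below is a minimal sketch of that general setup, assuming a PyTorch implementation; the module layout, loss weighting, and all names are illustrative assumptions and are not taken from the paper.

```python
# Minimal sketch (not the paper's implementation): an encoder-decoder that
# predicts a human-attention heatmap, trained with an attention-map loss plus
# an auxiliary task-encoding loss. Module names and the loss weight are assumptions.
import torch
import torch.nn as nn


class AttentionEncoderDecoder(nn.Module):
    def __init__(self, num_tasks: int = 10):
        super().__init__()
        # Encoder: downsample the RGB frame into a compact feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: upsample features back to a single-channel attention map.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )
        # Task head: predicts the ongoing task from the encoded features,
        # supplying the auxiliary task-encoding loss.
        self.task_head = nn.Linear(64, num_tasks)

    def forward(self, frames):
        feats = self.encoder(frames)                           # (B, 64, H/4, W/4)
        attn_logits = self.decoder(feats)                      # (B, 1, H, W)
        task_logits = self.task_head(feats.mean(dim=(2, 3)))   # (B, num_tasks)
        return attn_logits, task_logits


# Joint objective: attention-map loss + weighted task-encoding loss (dummy data).
model = AttentionEncoderDecoder()
frames = torch.randn(2, 3, 64, 64)
gt_attention = torch.rand(2, 1, 64, 64)
gt_task = torch.randint(0, 10, (2,))
attn_logits, task_logits = model(frames)
loss = (
    nn.functional.binary_cross_entropy_with_logits(attn_logits, gt_attention)
    + 0.5 * nn.functional.cross_entropy(task_logits, gt_task)
)
loss.backward()
```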

Keywords: Human attention, Deep neural network, Attentional objects

Article history: Received 10 August 2019, Revised 18 February 2020, Accepted 24 February 2020, Available online 26 February 2020, Version of Record 4 March 2020.

DOI: https://doi.org/10.1016/j.patcog.2020.107314