Gated multi-attention representation in reinforcement learning

Authors:

Highlights:

• A gated multi-attention module is proposed to eliminate task-irrelevant attention (see the illustrative sketch after this list).

• Our approach outperforms the baselines in both game scores and attention-focusing effects.

• An end-to-end architecture incorporating the gated multi-attention module is implemented.

• Grad-CAM is used to visualize and verify the attention effects; code is available.
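The highlights name the technique but this listing does not describe the module's internals. As a loose illustration of the general idea (several spatial attention heads over the convolutional features of a DQN, each modulated by a learned gate that can suppress task-irrelevant heads), a minimal PyTorch sketch might look like the following. The class name, the 1x1-conv head design, and the scalar sigmoid gating are assumptions made for illustration, not the authors' actual architecture.

```python
import torch
import torch.nn as nn

class GatedMultiAttention(nn.Module):
    """Hypothetical sketch: multiple spatial attention heads over conv
    features, each weighted by a learned sigmoid gate so that
    task-irrelevant heads can be suppressed. Not the paper's exact design."""

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        # One 1x1 conv per head produces a single-channel spatial attention map.
        self.heads = nn.ModuleList(
            [nn.Conv2d(channels, 1, kernel_size=1) for _ in range(num_heads)]
        )
        # One learnable scalar gate per head, squashed into (0, 1) by a sigmoid;
        # initialized at zero, so every gate starts at 0.5.
        self.gates = nn.Parameter(torch.zeros(num_heads))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, H, W) features from the DQN's conv encoder.
        out = x
        for head, gate in zip(self.heads, torch.sigmoid(self.gates)):
            attn = torch.sigmoid(head(x))   # (batch, 1, H, W) attention map
            out = out + gate * (attn * x)   # gated residual attention
        return out

if __name__ == "__main__":
    feats = torch.randn(2, 64, 7, 7)        # dummy Atari-sized conv features
    module = GatedMultiAttention(channels=64)
    print(module(feats).shape)              # torch.Size([2, 64, 7, 7])
```

Because the output keeps the input's shape, such a module could in principle be dropped between the conv encoder and the fully connected Q-value head of a standard DQN, which is consistent with the end-to-end claim in the highlights.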

Keywords: Deep reinforcement learning, Gated multi-attention module, Deep Q-learning network, Atari 2600 games

Article history: Received 30 June 2021; Revised 31 August 2021; Accepted 22 September 2021; Available online 24 September 2021; Version of Record 4 October 2021.

Paper URL: https://doi.org/10.1016/j.knosys.2021.107535