A novel multi-step reinforcement learning method for solving reward hacking

Authors: Yinlong Yuan, Zhu Liang Yu, Zhenghui Gu, Xiaoyan Deng, Yuanqing Li

Abstract

Reinforcement learning with an appropriately designed reward signal can be used to solve many sequential decision-making problems. In practice, however, reinforcement learning algorithms can fail in unexpected, counterintuitive ways. One such failure mode is reward hacking, which usually occurs when the reward function lets the agent obtain a high return in an unintended way. This unintended behavior may subvert the designer's intentions and lead to accidents during training. In this paper, a new multi-step state-action value algorithm is proposed to address reward hacking. Unlike traditional algorithms, the proposed method uses a new return function that alters the discounting of future rewards, so that the immediate reward is no longer the dominant influence when selecting the current action. The performance of the proposed method is evaluated on two games, Mappy and Mountain Car. The empirical results demonstrate that the proposed method can alleviate the negative impact of reward hacking and greatly improve the performance of reinforcement learning algorithms. Moreover, the results show that the proposed method can also be applied successfully to continuous state-space problems.
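The abstract does not give the paper's exact return function, but the core idea, re-weighting the discount so the immediate reward no longer dominates, can be illustrated with a hedged sketch. Below, `standard_return` is the usual discounted return, and `multistep_return` is a hypothetical variant (an assumption for illustration, not the authors' formula) that weights the first n rewards equally before discounting begins:

```python
# Illustrative sketch only: the paper's return function is not specified in
# the abstract, so this contrasts the standard discounted return with a
# hypothetical multi-step variant that de-emphasizes the immediate reward.

def standard_return(rewards, gamma=0.9):
    """Standard discounted return: G = sum_k gamma^k * r_k."""
    return sum(gamma ** k * r for k, r in enumerate(rewards))

def multistep_return(rewards, gamma=0.9, n=3):
    """Hypothetical n-step-style return (assumption): the first n rewards
    share equal weight, so a large immediate reward obtained by "hacking"
    no longer dominates action selection."""
    total = 0.0
    for k, r in enumerate(rewards):
        weight = 1.0 if k < n else gamma ** (k - n + 1)
        total += weight * r
    return total

# A "hacked" trajectory: a large immediate payoff followed by small rewards.
rewards = [10.0, 1.0, 1.0, 1.0]
print(standard_return(rewards))   # immediate reward contributes ~80% of the return
print(multistep_return(rewards))  # later rewards carry relatively more weight
```

Under the standard return, the first reward's share of the total is largest; the multi-step variant spreads influence across the lookahead window, which is the qualitative effect the abstract attributes to the proposed method.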

Keywords: Reinforcement learning, Robotics, Reward hacking, Multi-step methods


Paper link: https://doi.org/10.1007/s10489-019-01417-4