Forgetful experience replay in hierarchical reinforcement learning from expert demonstrations

Authors:

Highlights:

Abstract

Deep reinforcement learning (RL) shows impressive results in complex gaming and robotic environments. These results are commonly achieved at the expense of huge computational costs and require an enormous number of episodes of interaction between the agent and the environment. Hierarchical methods and expert demonstrations are among the most promising approaches to improving the sample efficiency of reinforcement learning methods. In this paper, we propose a combination of methods that allows the agent to use low-quality demonstrations in complex vision-based environments with multiple related goals. Our Forgetful Experience Replay (ForgER) algorithm effectively handles errors in expert data and reduces quality losses when adapting the action space and state representation to the agent's capabilities. The proposed goal-oriented replay buffer structure allows the agent to automatically identify sub-goals in demonstrations for solving complex hierarchical tasks. Our method is highly versatile and can be integrated into various off-policy methods. ForgER surpasses existing state-of-the-art RL methods that use expert demonstrations in complex environments. The solution based on our algorithm outperforms other entries in the well-known MineRL competition and allows the agent to exhibit expert-level behavior.
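The core mechanism described above, gradually "forgetting" low-quality expert demonstrations as the agent's own experience accumulates, can be illustrated with a minimal sketch. This is not the authors' implementation; the class name, the separate expert/agent partitions, and the decaying mixing ratio are all assumptions made for illustration.

```python
import random
from collections import deque

class ForgetfulReplayBuffer:
    """Illustrative sketch of a forgetful replay buffer (not the paper's code).

    Expert transitions are stored separately from agent transitions.
    The probability of sampling from the expert partition decays after
    each batch, so imperfect expert data is gradually "forgotten" in
    favor of the agent's own experience.
    """

    def __init__(self, capacity=10000, expert_ratio=0.5, decay=0.999):
        self.expert = []                     # fixed expert demonstrations
        self.agent = deque(maxlen=capacity)  # agent's own experience
        self.expert_ratio = expert_ratio     # hypothetical mixing weight
        self.decay = decay                   # per-batch forgetting rate

    def add_expert(self, transition):
        self.expert.append(transition)

    def add_agent(self, transition):
        self.agent.append(transition)

    def sample(self, batch_size):
        # Draw each transition from the expert pool with the current
        # mixing probability; otherwise draw from agent experience.
        batch = []
        for _ in range(batch_size):
            use_expert = bool(self.expert) and (
                not self.agent or random.random() < self.expert_ratio)
            pool = self.expert if use_expert else list(self.agent)
            batch.append(random.choice(pool))
        self.expert_ratio *= self.decay  # forget expert data over time
        return batch
```

In the paper's hierarchical setting, one such buffer per sub-goal would realize the goal-oriented structure; the sketch above shows only the forgetting schedule for a single buffer.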

Keywords: Expert demonstrations, ForgER, Hierarchical reinforcement learning, Learning from demonstrations, Task-oriented augmentation, Goal-oriented reinforcement learning

Article history: Received 11 October 2020, Revised 25 January 2021, Accepted 30 January 2021, Available online 12 February 2021, Version of Record 17 February 2021.

DOI: https://doi.org/10.1016/j.knosys.2021.106844