Integrating Classical Control into Reinforcement Learning Policy

Authors: Ye Huang, Chaochen Gu, Xinping Guan

Abstract

Deep reinforcement learning has recently made impressive advances in sequential decision-making problems. Numerous reinforcement learning (RL) algorithms have been proposed that focus on the policy optimization process, while the effect of different policy network architectures has not been fully explored. MLPs, LSTMs, and linear layers are complementary in their control capabilities: MLPs are appropriate for global control, LSTMs are able to exploit history information, and linear layers are good at stabilizing system dynamics. In this paper, we propose a "Proportional-Integral" (PI) neural network architecture that can be easily combined with popular optimization algorithms. This PI-patterned policy network exploits the advantages of integral control and linear control, which are widely applied in classical control systems; building on it, an ensemble-learning-based model is trained to further improve sample efficiency and training performance on most RL tasks. Experimental results on public RL simulation platforms demonstrate that the proposed architecture achieves better performance than the commonly used MLP and other existing models.
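The control-theoretic analogy behind the architecture can be illustrated with a minimal sketch: a proportional path that is a linear map of the current observation, plus an integral path that accumulates history (the role the paper assigns to an LSTM). All names, gains, and the numpy-only implementation below are illustrative assumptions, not the authors' code; weights are random placeholders where a real policy would learn them via an RL algorithm.

```python
import numpy as np

class PIPolicySketch:
    """Illustrative 'Proportional-Integral' policy head (assumed structure).

    Proportional path: linear map of the current observation, analogous
    to the P term of a PI controller. Integral path: a running
    accumulation of observations, a stand-in for the LSTM the paper
    uses to carry history information. Weights are random placeholders.
    """

    def __init__(self, obs_dim: int, act_dim: int, ki: float = 0.1, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.W_p = rng.normal(scale=0.1, size=(act_dim, obs_dim))  # proportional gain
        self.W_i = rng.normal(scale=0.1, size=(act_dim, obs_dim))  # integral gain
        self.ki = ki
        self.accum = np.zeros(obs_dim)  # running "integral" of observations

    def reset(self) -> None:
        # Clear accumulated history at the start of each episode.
        self.accum = np.zeros_like(self.accum)

    def act(self, obs: np.ndarray) -> np.ndarray:
        self.accum = self.accum + self.ki * obs  # integral path: history
        p_term = self.W_p @ obs                  # proportional path: current state
        i_term = self.W_i @ self.accum
        return np.tanh(p_term + i_term)          # bounded action for continuous control
```

The sum of the two paths mirrors how the PI-patterned network combines a linear layer (stabilizing, memoryless) with a recurrent component (history-exploiting) before producing an action.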

Keywords: Reinforcement learning, Deep learning, Neural network, Control theory

DOI: https://doi.org/10.1007/s11063-019-10127-4