An efficient L2-norm regularized least-squares temporal difference learning algorithm

Abstract:

In reinforcement learning, when samples are limited, as in many real-world applications, Least-Squares Temporal Difference (LSTD) learning is prone to over-fitting, which can be overcome by introducing regularization. However, the solution of regularized LSTD still depends on costly matrix-inversion operations. In this paper we investigate L2-norm regularized LSTD learning and propose an efficient algorithm that avoids this expensive computation. We derive LSTD using the Bellman operator together with a projection operator, introduce an L2-norm penalty to avoid over-fitting, and describe the difference between Bellman residual minimization and LSTD. We then propose an efficient recursive least-squares algorithm for L2-norm regularized LSTD, which eliminates matrix-inversion operations and effectively reduces computational complexity. We present empirical comparisons on the Boyan chain problem, and the results show that the new algorithm outperforms regularized LSTD.
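The computational idea summarized above can be illustrated with a short sketch. With linear features, L2-regularized LSTD solves (A + βI)θ = b, where A = Σ φ_t(φ_t − γφ_{t+1})ᵀ and b = Σ φ_t r_t; a recursive least-squares scheme instead maintains P ≈ (A + βI)⁻¹ through Sherman–Morrison rank-one updates, so the matrix is never inverted explicitly. The code below is a minimal sketch of such an update, assuming linear value features; the names and structure are illustrative, not the paper's notation, and this is a plausible instance of the technique rather than the authors' exact algorithm.

```python
import numpy as np

def rls_td_update(P, theta, phi, phi_next, reward, gamma=0.95):
    """One recursive least-squares TD step via a Sherman-Morrison update.

    P approximates (A + beta*I)^{-1}; initializing P = (1/beta) * I
    plays the role of the L2 penalty, so no explicit inversion is needed.
    """
    dphi = phi - gamma * phi_next                 # TD feature difference
    denom = 1.0 + dphi @ (P @ phi)                # scalar normalizer
    gain = (P @ phi) / denom                      # gain vector
    td_error = reward - dphi @ theta              # residual of current estimate
    theta = theta + gain * td_error               # parameter update
    P = P - np.outer(P @ phi, dphi @ P) / denom   # Sherman-Morrison rank-one update
    return P, theta

# Usage: d-dimensional features, beta is the L2 regularization strength.
d, beta = 4, 1.0
P = np.eye(d) / beta                              # P_0 = (beta * I)^{-1}
theta = np.zeros(d)
```

Each update costs O(d²) in the feature dimension d, compared with the O(d³) of re-solving the regularized system by direct inversion, which is the complexity saving the abstract refers to.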

Keywords: Reinforcement learning, Temporal difference, Recursive least-squares, Bellman residual minimization, Regularization

Article history: Received 18 June 2012, Revised 16 February 2013, Accepted 17 February 2013, Available online 27 February 2013.

DOI: https://doi.org/10.1016/j.knosys.2013.02.010