Bellman residuals minimization using online support vector machines

Authors: Gennaro Esposito, Mario Martin

Abstract

In this paper we present and theoretically study an Approximate Policy Iteration (API) method, called API-BRMε, that uses a very effective implementation of incremental Support Vector Regression (SVR) to approximate the value function and is able to generalize over Reinforcement Learning (RL) problems with continuous (or large) state spaces. API-BRMε is presented as a non-parametric regularization method derived from Bellman Residual Minimization (BRM) that minimizes the variance of the problem. The proposed method is incremental and can be applied to the on-line agent-interaction framework of RL. Since it is based on SVR, which relies on convex optimization, it finds the global solution of the problem. API-BRMε with SVR can be seen as a regularization problem using the ε-insensitive loss; compared with the standard squared loss commonly used in regularization, this naturally yields a sparse solution for the approximating function. We extensively analyze the statistical properties of API-BRMε, establishing a bound that controls the performance loss of the algorithm under some assumptions on the kernel and assuming that the collected samples are non-i.i.d., following a β-mixing process. Experimental evidence and performance on well-known RL benchmarks are also presented.
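For orientation, a schematic sketch of the kind of regularized BRM objective with ε-insensitive loss that the abstract refers to is given below; the notation (value function V in an RKHS H induced by the kernel, sampled transitions (s_i, r_i, s'_i), discount factor γ, and regularization constant C) is assumed here for illustration, and the exact formulation, sampling scheme, and variance-reducing correction are the ones defined in the paper.

\min_{V \in \mathcal{H}} \; \tfrac{1}{2}\,\|V\|_{\mathcal{H}}^{2} \;+\; C \sum_{i=1}^{n} \bigl|\, V(s_i) - \bigl( r_i + \gamma\, V(s'_i) \bigr) \,\bigr|_{\varepsilon},
\qquad \text{where } |u|_{\varepsilon} = \max\bigl(0,\, |u| - \varepsilon\bigr).

Because residuals smaller than ε incur no penalty, only a subset of samples become support vectors, which is what yields the sparse approximation of the value function mentioned in the abstract.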

Keywords: Reinforcement learning, Support vector machine, Approximate policy iteration, Regularization, Regression

Review process:

Paper URL: https://doi.org/10.1007/s10489-017-0910-7