Estimation and Approximation Bounds for Gradient-Based Reinforcement Learning

Authors:

Highlights:

Abstract

We model reinforcement learning as the problem of learning to control a partially observable Markov decision process (POMDP) and focus on gradient ascent approaches to this problem. In an earlier work (2001, J. Artificial Intelligence Res. 14) we introduced GPOMDP, an algorithm for estimating the performance gradient of a POMDP from a single sample path, and we proved that its estimates converge almost surely to an approximation of the true gradient. In this paper, we provide a convergence rate for the estimates produced by GPOMDP and give an improved bound on the approximation error of these estimates. Both bounds are expressed in terms of mixing times of the POMDP.
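To make the estimator concrete, the following Python sketch illustrates a GPOMDP-style single-sample-path gradient estimate as described in the abstract: an eligibility trace of policy score functions is accumulated with a discount factor beta, and the gradient estimate is a running average of reward-weighted traces. The `env` and `policy` interfaces, parameter names, and defaults are assumptions made for illustration, not taken from the paper.

```python
import numpy as np

def gpomdp_estimate(env, policy, beta=0.9, num_steps=100_000):
    """One-sample-path GPOMDP-style gradient estimate (minimal sketch).

    `env` and `policy` are hypothetical interfaces, not from the paper:
        env.reset() -> observation
        env.step(action) -> (observation, reward)
        policy.sample(obs) -> action
        policy.grad_log_prob(obs, action) -> gradient array
        policy.num_params -> int
    The discount factor beta in [0, 1) controls the bias/variance
    trade-off that the paper's bounds quantify via the POMDP's mixing time.
    """
    z = np.zeros(policy.num_params)      # eligibility trace z_t
    delta = np.zeros(policy.num_params)  # running gradient estimate
    obs = env.reset()
    for t in range(num_steps):
        action = policy.sample(obs)
        # Discounted accumulation of score functions along the sample path
        z = beta * z + policy.grad_log_prob(obs, action)
        obs, reward = env.step(action)
        # Running average of reward-weighted eligibility traces
        delta += (reward * z - delta) / (t + 1)
    return delta
```

Larger values of beta reduce the approximation error of the estimate at the cost of higher variance (and hence slower convergence), which is the trade-off the paper's bounds make precise.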

Keywords:

Article history: Received 17 November 2000, Revised 12 June 2001, Available online 25 May 2002.

DOI: https://doi.org/10.1006/jcss.2001.1793