On the Convergence of Temporal-Difference Learning with Linear Function Approximation

Author: Vladislav Tadić

Abstract

This paper analyzes the asymptotic properties of temporal-difference learning algorithms with linear function approximation. The analysis is carried out in the context of approximating the discounted cost-to-go function associated with an uncontrolled Markov chain whose state-space is uncountable and finite-dimensional. Under mild conditions, the almost sure convergence of temporal-difference learning algorithms with linear function approximation is established, and an upper bound on their asymptotic approximation error is determined. The results generalize and extend existing results on the asymptotic behavior of temporal-difference learning. Moreover, they cover cases to which the existing results do not apply, while the adopted assumptions appear to be the weakest under which the almost sure convergence of temporal-difference learning algorithms can still be demonstrated.
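
For context, below is a minimal sketch of TD(λ) with linear function approximation, the algorithm family the abstract refers to. The Markov chain (an AR(1) process on the real line, chosen as a simple example of an uncountable state-space), the quadratic per-step cost, the polynomial feature map, and the 1/(t+1) step sizes are all illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def td_lambda_linear(sample_transition, phi, dim, gamma, lam, steps, s0, seed=0):
    """TD(lambda) with linear function approximation, V(s) ~ phi(s) @ w.

    A minimal sketch: the chain, features, and step sizes are illustrative,
    not the paper's setting.
    """
    rng = np.random.default_rng(seed)
    w = np.zeros(dim)      # weight vector of the linear approximator
    z = np.zeros(dim)      # eligibility trace
    s = s0
    for t in range(steps):
        s_next, cost = sample_transition(s, rng)
        # temporal-difference error for the discounted cost-to-go function
        delta = cost + gamma * (phi(s_next) @ w) - (phi(s) @ w)
        z = gamma * lam * z + phi(s)          # accumulate eligibility trace
        w = w + (1.0 / (t + 1)) * delta * z   # Robbins-Monro step sizes:
        s = s_next                            # sum diverges, squares summable
    return w

# Hypothetical example: AR(1) chain on the real line (an uncountable
# state-space), quadratic one-step cost, polynomial features.
phi = lambda s: np.array([1.0, s, s * s])

def sample_transition(s, rng):
    s_next = 0.5 * s + rng.normal(scale=0.1)
    return s_next, s * s   # next state and observed one-step cost

w = td_lambda_linear(sample_transition, phi, dim=3, gamma=0.9, lam=0.5,
                     steps=50_000, s0=0.0)
print("fitted weights:", w)
```

The AR(1) chain above is positive Harris recurrent (one of the paper's keywords), which is the kind of stability property under which almost sure convergence results of this type are typically stated.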

Keywords: temporal-difference learning, reinforcement learning, neuro-dynamic programming, almost sure convergence, Markov chains, positive Harris recurrence


Paper URL: https://doi.org/10.1023/A:1007609817671