On Average Versus Discounted Reward Temporal-Difference Learning

Authors: John N. Tsitsiklis, Benjamin Van Roy

Abstract

We provide an analytical comparison between discounted and average reward temporal-difference (TD) learning with linearly parameterized approximations. We first consider the asymptotic behavior of the two algorithms. We show that as the discount factor approaches 1, the value function produced by discounted TD approaches the differential value function generated by average reward TD. We further argue that if the constant function—which is typically used as one of the basis functions in discounted TD—is appropriately scaled, the transient behaviors of the two algorithms are also similar. Our analysis suggests that the computational advantages of average reward TD that have been observed in some prior empirical work may have been caused by inappropriate basis function scaling rather than fundamental differences in problem formulations or algorithms.
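
To make the comparison concrete, here is a minimal illustrative sketch (not taken from the paper) of the two update rules the abstract contrasts: discounted TD(0) and average-reward TD(0), both with a linear value approximation θᵀφ(s). The Markov chain, feature map, and step sizes below are arbitrary choices for illustration; note the constant feature φ₀(s) = 1, which plays the role of the "constant basis function" whose scaling the abstract discusses.

```python
import numpy as np

rng = np.random.default_rng(0)

n_states = 5
P = rng.dirichlet(np.ones(n_states), size=n_states)  # random transition matrix
r = rng.uniform(0.0, 1.0, size=n_states)              # expected one-step rewards

def phi(s):
    """Linear features; phi[0] is the constant basis function."""
    x = s / (n_states - 1)
    return np.array([1.0, x, x ** 2])

def discounted_td(gamma=0.99, alpha=0.05, steps=50_000):
    """Discounted TD(0): delta = r + gamma * V(s') - V(s)."""
    theta = np.zeros(3)
    s = 0
    for _ in range(steps):
        s_next = rng.choice(n_states, p=P[s])
        delta = r[s] + gamma * phi(s_next) @ theta - phi(s) @ theta
        theta += alpha * delta * phi(s)
        s = s_next
    return theta

def average_reward_td(alpha=0.05, beta=0.01, steps=50_000):
    """Average-reward TD(0): learns a differential value function and an
    estimate mu of the average reward, delta = r - mu + h(s') - h(s)."""
    theta = np.zeros(3)
    mu = 0.0
    s = 0
    for _ in range(steps):
        s_next = rng.choice(n_states, p=P[s])
        delta = r[s] - mu + phi(s_next) @ theta - phi(s) @ theta
        theta += alpha * delta * phi(s)
        mu += beta * (r[s] - mu)
        s = s_next
    return theta, mu

theta_disc = discounted_td(gamma=0.99)
theta_avg, mu = average_reward_td()
print("discounted TD weights:    ", theta_disc)
print("average-reward TD weights:", theta_avg, " estimated average reward:", mu)
```

In this sketch, the only structural differences are the discount factor γ versus the running average-reward estimate μ, which is the pairing the abstract analyzes as γ approaches 1.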

Keywords: average reward, dynamic programming, function approximation, temporal-difference learning

Paper link: https://doi.org/10.1023/A:1017980312899