The loss from imperfect value functions in expectation-based and minimax-based tasks

Author: Matthias Heger

Abstract

Many reinforcement learning (RL) algorithms approximate an optimal value function. Once this function is known, it is easy to determine an optimal policy. For most real-world applications, however, the value function is too complex to be represented by lookup tables, making it necessary to use function approximators such as neural networks. In this case, convergence to the optimal value function is no longer guaranteed, and it becomes important to know to what extent performance diminishes when approximate value functions are used instead of optimal ones. This problem has recently been discussed in the context of expectation-based Markov decision problems. Our analysis generalizes this work to minimax-based Markov decision problems, yields new results for expectation-based tasks, and shows how minimax-based and expectation-based Markov decision problems relate.
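To make the question concrete, the sketch below illustrates the kind of loss bound this line of work studies for the expectation-based case: if a value estimate is within ε of V* in the sup norm, the policy that acts greedily with respect to it loses at most 2γε/(1−γ) in value. This is a minimal illustration only; the MDP numbers, function names, and the particular bound checked are illustrative assumptions (the classical expectation-based bound from the prior work the abstract alludes to), not the paper's own minimax construction or results.

```python
import numpy as np

# Hypothetical two-state, two-action expectation-based MDP (numbers are illustrative only).
gamma = 0.9
P = np.array([            # P[a, s, s'] = transition probability
    [[0.8, 0.2],
     [0.1, 0.9]],
    [[0.3, 0.7],
     [0.6, 0.4]],
])
R = np.array([            # R[a, s] = expected immediate reward
    [1.0, 0.0],
    [0.5, 1.5],
])

def value_iteration(P, R, gamma, tol=1e-10):
    """Compute the optimal value function V* by value iteration."""
    V = np.zeros(P.shape[1])
    while True:
        Q = R + gamma * np.einsum('ast,t->as', P, V)   # Q[a, s]
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new

def greedy_policy(P, R, gamma, V):
    """One-step greedy policy with respect to a (possibly imperfect) value estimate V."""
    Q = R + gamma * np.einsum('ast,t->as', P, V)
    return Q.argmax(axis=0)

def policy_value(P, R, gamma, pi):
    """Exact value of a deterministic policy: solve (I - gamma * P_pi) V = R_pi."""
    n = P.shape[1]
    P_pi = np.array([P[pi[s], s] for s in range(n)])
    R_pi = np.array([R[pi[s], s] for s in range(n)])
    return np.linalg.solve(np.eye(n) - gamma * P_pi, R_pi)

V_star = value_iteration(P, R, gamma)

# Mimic an imperfect function approximator: perturb V* by at most eps (sup norm).
rng = np.random.default_rng(0)
eps = 0.5
V_hat = V_star + rng.uniform(-eps, eps, size=V_star.shape)

pi_hat = greedy_policy(P, R, gamma, V_hat)
loss = np.max(V_star - policy_value(P, R, gamma, pi_hat))

# Classical expectation-based bound: loss <= 2 * gamma * eps / (1 - gamma).
print(f"actual loss = {loss:.4f}, bound = {2 * gamma * eps / (1 - gamma):.4f}")
```

The paper's contribution, per the abstract, is to carry this type of analysis over to the minimax (worst-case) criterion and to sharpen the expectation-based results; the exact minimax bounds are given in the paper itself and are not reproduced here.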

Keywords: Reinforcement Learning, Dynamic Programming, Performance Bounds, Minimax Algorithms, Q-Learning

Paper URL: https://doi.org/10.1007/BF00114728