Neural Network Ensembles in Reinforcement Learning

Authors: Stefan Faußer, Friedhelm Schwenker

Abstract

The integration of function approximation methods into reinforcement learning models allows for learning state and state-action values in large state spaces. Model-free methods, such as temporal-difference learning or SARSA, yield good results for problems where the Markov property holds. However, temporal-difference-based methods are known to be unstable estimators of the value functions when combined with function approximation. This instability depends on the Markov chain, the discount factor, and the chosen function approximator. In this paper, we propose a meta-algorithm to learn state or state-action values in a neural network ensemble, formed by a committee of multiple agents. The agents learn from joint decisions. We show that the committee benefits from the diversity in its members' value estimates. We empirically evaluate our algorithm on a generalized maze problem and on SZ-Tetris. The empirical evaluations confirm our analytical results.
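
The following is a minimal sketch of the committee idea described in the abstract, not the authors' exact method: several independently initialized value approximators form an ensemble whose joint prediction is their average, and each member is updated with a TD(0)-style step toward a target built from that joint estimate. For brevity the members are linear approximators rather than neural networks; all names, sizes, and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N_FEATURES = 8   # size of the state feature vector (assumption)
N_AGENTS = 5     # committee size (assumption)
GAMMA = 0.9      # discount factor
ALPHA = 0.05     # learning rate

# Each committee member holds its own weight vector; diversity comes
# from independent random initialization.
weights = [rng.normal(scale=0.1, size=N_FEATURES) for _ in range(N_AGENTS)]

def committee_value(phi):
    """Joint value estimate: average of the members' predictions w_i^T phi."""
    return np.mean([w @ phi for w in weights])

def td_update(phi, reward, phi_next, terminal):
    """TD(0) update of every member toward the committee's joint target."""
    target = reward if terminal else reward + GAMMA * committee_value(phi_next)
    for w in weights:
        delta = target - w @ phi   # per-member TD error against the joint target
        w += ALPHA * delta * phi   # gradient step on (target - V_i(s))^2

# Toy usage: one fictitious transition s -> s' with reward 1.
phi_s = rng.normal(size=N_FEATURES)
phi_s_next = rng.normal(size=N_FEATURES)
td_update(phi_s, reward=1.0, phi_next=phi_s_next, terminal=False)
print("Committee estimate V(s):", committee_value(phi_s))
```

Averaging the members' predictions before forming the bootstrap target is what couples the agents through joint decisions; training each member on its own target instead would reduce this to independent TD learners.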

Keywords: Neural network ensemble, Learning from unstable estimations of value functions, Reinforcement learning with function approximation, Large environments

Paper link: https://doi.org/10.1007/s11063-013-9334-5