Learning classifier system with average reward reinforcement learning

Authors:

Highlights:

Abstract:

In the family of Learning Classifier Systems, the classifier system XCS is the most widely used and investigated. However, the standard XCS has difficulty solving large multi-step problems, where long action chains are needed to obtain delayed rewards. To date, the reinforcement learning technique in XCS has been based on Q-learning, which optimizes the discounted total reward received by an agent but tends to limit the length of action chains. In contrast, undiscounted reinforcement learning methods such as R-learning, and average reward reinforcement learning in general, optimize the average reward per time step. In this paper, R-learning replaces Q-learning as the reinforcement learning technique employed by XCS. The modification yields a classifier system that learns rapidly and can solve large maze problems. In addition, it produces uniformly spaced payoff levels, which support long action chains and thus effectively prevent overgeneralization.
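The core change the abstract describes is swapping Q-learning's discounted update for R-learning's average-reward update, in which the running average reward is subtracted from each reward instead of discounting future values. The following is a minimal tabular R-learning sketch, not the paper's XCS integration; the function name, parameters, and the toy environment interface are illustrative assumptions:

```python
import random

def r_learning(step, n_states, n_actions, alpha=0.1, beta=0.01,
               epsilon=0.1, steps=5000, seed=0):
    """Illustrative tabular R-learning (average-reward RL).

    `step(s, a)` is a user-supplied environment returning (reward, next_state).
    Returns the value table R and the average-reward estimate rho.
    """
    rng = random.Random(seed)
    R = [[0.0] * n_actions for _ in range(n_states)]
    rho = 0.0  # running estimate of the average reward per time step
    s = 0
    for _ in range(steps):
        greedy = max(range(n_actions), key=lambda x: R[s][x])
        a = rng.randrange(n_actions) if rng.random() < epsilon else greedy
        r, s2 = step(s, a)
        best_next = max(R[s2])
        # Undiscounted update: subtract rho instead of discounting the future.
        R[s][a] += alpha * (r - rho + best_next - R[s][a])
        if a == greedy:
            # rho is adjusted only after greedy (non-exploratory) actions.
            rho += beta * (r - rho + best_next - max(R[s]))
        s = s2
    return R, rho
```

Because the update subtracts rho rather than shrinking future values by a discount factor, value estimates along a long action chain stay uniformly spaced instead of decaying geometrically toward zero, which is the property the abstract credits with supporting long chains.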

Keywords: Learning classifier systems, XCS, R-learning, Average reward, Reinforcement learning, Multi-step problems

Article history: Received 4 June 2012, Revised 4 October 2012, Accepted 25 November 2012, Available online 5 December 2012.

DOI: https://doi.org/10.1016/j.knosys.2012.11.011