A Modified Memory-Based Reinforcement Learning Method for Solving POMDP Problems

Authors: Lei Zheng, Siu-Yeung Cho

Abstract

Partially observable Markov decision processes (POMDPs) provide a mathematical framework for agent planning in stochastic, partially observable environments. The classic Bayesian optimal solution can be obtained by transforming the problem into a Markov decision process (MDP) over belief states. However, because the belief-state space is continuous and multi-dimensional, the problem is highly intractable. Many practical heuristic-based methods have been proposed, but most of them require a complete POMDP model of the environment, which is not always available in practice. This article introduces a modified memory-based reinforcement learning algorithm, called modified U-Tree, that is capable of learning from raw sensor experiences with minimal prior knowledge. The article describes an enhancement of the original U-Tree's state generation process that makes the generated model more compact, and also proposes a modification of the statistical test for reward estimation, which allows the algorithm to be benchmarked against some traditional model-based algorithms on a set of well-known POMDP problems.
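For readers unfamiliar with the belief-state transformation mentioned in the abstract, the following minimal Python/NumPy sketch shows the standard Bayesian belief update that turns a POMDP into a belief-state MDP. The function and array names (belief_update, T, Z) are illustrative assumptions, not taken from the paper:

    import numpy as np

    def belief_update(b, a, o, T, Z):
        # One Bayesian filtering step: b'(s') ∝ P(o | s', a) * sum_s P(s' | s, a) * b(s)
        # b : (S,)       current belief over hidden states
        # T : (A, S, S)  T[a, s, s'] = P(s' | s, a)
        # Z : (A, S, O)  Z[a, s', o] = P(o | s', a)
        predicted = T[a].T @ b            # predict the next-state distribution
        unnorm = Z[a, :, o] * predicted   # weight by the observation likelihood
        return unnorm / unnorm.sum()      # normalize to obtain the new belief

    # Tiny 2-state, 1-action, 2-observation example (numbers are illustrative)
    T = np.array([[[0.9, 0.1], [0.2, 0.8]]])
    Z = np.array([[[0.85, 0.15], [0.3, 0.7]]])
    b = np.array([0.5, 0.5])
    print(belief_update(b, a=0, o=1, T=T, Z=Z))

The original U-Tree grows its state tree by statistically testing whether the distributions of future discounted returns differ between candidate leaf distinctions. The sketch below illustrates that style of fringe test with a two-sample Kolmogorov-Smirnov test; the paper's modified test for reward estimation may differ in its details:

    from scipy.stats import ks_2samp

    def should_split(returns_a, returns_b, alpha=0.05):
        # Split a leaf if the return samples routed to the two candidate
        # children come from significantly different distributions.
        statistic, p_value = ks_2samp(returns_a, returns_b)
        return p_value < alpha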

Keywords: Memory-based reinforcement learning, Markov decision processes, Partially observable Markov decision processes, Reinforcement learning


Paper URL: https://doi.org/10.1007/s11063-011-9172-2