Approximate planning for Bayesian hierarchical reinforcement learning

Authors: Ngo Anh Vien, Hung Ngo, Sungyoung Lee, TaeChoong Chung

Abstract

In this paper, we propose to use hierarchical action decomposition to make Bayesian model-based reinforcement learning more efficient and feasible for larger problems. We formulate Bayesian hierarchical reinforcement learning as a partially observable semi-Markov decision process (POSMDP). The main POSMDP task is partitioned into a hierarchy of POSMDP subtasks. Each subtask might consist only of primitive actions, or it might hierarchically call other subtasks' policies, since the policies of lower-level subtasks are treated as macro actions in higher-level subtasks. A solution to this hierarchical action decomposition is obtained by solving lower-level subtasks first, then higher-level ones. Because each formulated POSMDP has a continuous state space, we sample from a prior belief to build an approximate model for each of them, then solve them using the recently introduced Monte Carlo Value Iteration with Macro-Actions solver. We name this method Monte Carlo Bayesian Hierarchical Reinforcement Learning. Simulation results show that our algorithm, which exploits the action hierarchy, performs significantly better than flat Bayesian reinforcement learning in terms of both reward and, especially, solving time, improving the latter by at least one order of magnitude.
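To make the bottom-up solving order described above concrete, the following is a minimal, hypothetical sketch: subtask models are approximated by sampling from a prior belief, lower-level subtasks are solved first, and their resulting policies are exposed as macro actions to the parent subtask. All names here (Subtask, sample_models, solve_subtask, solve_hierarchy) are illustrative assumptions, not the paper's implementation, and the actual POSMDP solver (Monte Carlo Value Iteration with Macro-Actions) is only stubbed out by a trivial action-scoring rule.

```python
# Illustrative sketch (not the authors' code) of bottom-up hierarchical solving
# with sampled models and macro actions. All names are hypothetical.
import random
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Subtask:
    """A node in the task hierarchy: primitive actions plus child subtasks."""
    name: str
    primitive_actions: List[str]
    children: List["Subtask"] = field(default_factory=list)
    policy: Callable[[str], str] = None  # filled in once the subtask is solved


def sample_models(subtask: Subtask, prior: Dict[str, float], n_samples: int) -> List[Dict[str, float]]:
    """Stand-in for sampling approximate models from the prior belief."""
    return [{a: prior.get(a, 0.0) + random.gauss(0.0, 0.1)
             for a in subtask.primitive_actions}
            for _ in range(n_samples)]


def solve_subtask(subtask: Subtask, prior: Dict[str, float], n_samples: int = 20) -> Callable[[str], str]:
    """Placeholder for a POSMDP solver such as MCVI with macro actions:
    here we simply pick the action with the highest average sampled value."""
    models = sample_models(subtask, prior, n_samples)
    # Macro actions: the (already solved) child policies, referenced by name.
    macro_actions = [child.name for child in subtask.children]
    candidates = subtask.primitive_actions + macro_actions
    scores = {a: sum(m.get(a, 0.0) for m in models) / n_samples for a in candidates}
    best = max(scores, key=scores.get)
    return lambda belief_state: best  # a trivial stationary "policy"


def solve_hierarchy(root: Subtask, prior: Dict[str, float]) -> None:
    """Solve lower-level subtasks first, then higher-level ones (post-order)."""
    for child in root.children:
        solve_hierarchy(child, prior)
    root.policy = solve_subtask(root, prior)


if __name__ == "__main__":
    navigate = Subtask("navigate", ["north", "south", "east", "west"])
    pickup = Subtask("pickup", ["grasp", "release"])
    root = Subtask("deliver", ["wait"], children=[navigate, pickup])
    solve_hierarchy(root, prior={"north": 1.0, "grasp": 0.5, "wait": 0.1})
    print(root.policy("some-belief"))  # prints the chosen primitive or macro action
```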

Keywords: Reinforcement learning, Bayesian model-based RL, Bayesian reinforcement learning, Model-based reinforcement learning, Partially observable Markov decision process (POMDP), Partially observable semi-MDP (POSMDP)


Paper link: https://doi.org/10.1007/s10489-014-0565-6