Decentralized MDPs with sparse interactions

Authors: Francisco S. Melo, Manuela Veloso

Abstract

Creating coordinated multiagent policies in environments with uncertainty is a challenging problem that can be greatly simplified if the coordination needs are known to be limited to specific parts of the state space. In this work, we explore how such local interactions can simplify coordination in multiagent systems. We focus on problems in which the interaction between the agents is sparse and contribute a new decision-theoretic model for decentralized sparse-interaction multiagent systems, the Dec-SIMDP, that explicitly distinguishes the situations in which the agents in the team must coordinate from those in which they can act independently. We relate our new model to existing models such as multiagent MDPs (MMDPs) and decentralized MDPs (Dec-MDPs). We then propose a solution method that takes advantage of the particular structure of Dec-SIMDPs and provide theoretical error bounds on the quality of the obtained solution. Finally, we present a reinforcement learning algorithm in which independent agents learn both individual policies and when and how to coordinate. We illustrate the application of these algorithms throughout the paper in several multiagent navigation scenarios.
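
The central idea of the abstract, agents that act independently everywhere except in a small set of interaction states, can be illustrated with a short sketch. The Python example below is not code from the paper; it is a toy two-agent corridor navigation scenario, in the spirit of the paper's navigation examples, in which all names (DOORWAY, GOALS, the yielding rule) are assumptions made for illustration. It shows how policies decompose: each agent follows its own individual policy outside the shared doorway cell, and a simple coordination rule is invoked only when an agent is near that interaction area.

```python
# Minimal sketch of the sparse-interaction idea: two agents cross a corridor
# and only coordinate near a shared "interaction area" (a doorway cell).
# This is an illustrative assumption, not the authors' implementation.

GRID_WIDTH = 5                 # corridor cells 0..4
DOORWAY = 2                    # single shared cell: the interaction area
GOALS = {"a1": 4, "a2": 0}     # agent 1 moves right, agent 2 moves left


def independent_policy(agent, pos):
    """Individual policy used outside the interaction area:
    simply step toward the agent's own goal."""
    goal = GOALS[agent]
    if pos < goal:
        return +1
    if pos > goal:
        return -1
    return 0


def coordinated_joint_action(pos1, pos2):
    """Coordination rule applied only near the doorway: if both agents
    would enter the shared cell simultaneously, one of them yields,
    avoiding the collision that purely independent policies could cause."""
    a1 = independent_policy("a1", pos1)
    a2 = independent_policy("a2", pos2)
    if pos1 + a1 == DOORWAY and pos2 + a2 == DOORWAY:
        a2 = 0  # agent 2 yields; any fixed tie-breaking rule would do
    return a1, a2


def step(pos1, pos2):
    """One decision step: agents act independently unless either one is
    adjacent to the interaction area, in which case they coordinate."""
    near_doorway = abs(pos1 - DOORWAY) <= 1 or abs(pos2 - DOORWAY) <= 1
    if near_doorway:
        a1, a2 = coordinated_joint_action(pos1, pos2)
    else:
        a1 = independent_policy("a1", pos1)
        a2 = independent_policy("a2", pos2)
    # Clamp moves to the corridor and keep the doorway single-occupancy.
    new1 = max(0, min(GRID_WIDTH - 1, pos1 + a1))
    new2 = max(0, min(GRID_WIDTH - 1, pos2 + a2))
    if new1 == new2 == DOORWAY:
        new2 = pos2  # collision resolved by agent 2 staying put
    return new1, new2


if __name__ == "__main__":
    pos1, pos2 = 0, 4
    for t in range(10):
        print(f"t={t}: agent1 at {pos1}, agent2 at {pos2}")
        if pos1 == GOALS["a1"] and pos2 == GOALS["a2"]:
            break
        pos1, pos2 = step(pos1, pos2)
```

In the paper's terms, the `near_doorway` test plays the role of detecting an interaction area, where the agents must reason jointly; everywhere else each agent solves its own MDP, which is what makes the sparse-interaction structure cheaper than planning over the full joint state and action space.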

Keywords: Multiagent coordination, Sparse interaction, Decentralized Markov decision processes

Article history: Received 26 April 2010; Revised 29 April 2011; Accepted 7 May 2011; Available online 10 May 2011.

DOI: https://doi.org/10.1016/j.artint.2011.05.001