Completely model-free RL-based consensus of continuous-time multi-agent systems

Abstract:

In this paper, we study the consensus of continuous-time general linear multi-agent systems in the absence of model information by using an adaptive dynamic programming (ADP) based reinforcement learning (RL) approach. The RL approach is introduced to learn the feedback gain matrix needed to construct the control algorithm that guarantees consensus, using only the available measurements. For state feedback control, the RL algorithm requires only the state and input of an arbitrary agent, while for output feedback control, it depends only on the input and output information of an arbitrary agent, without any model information. Finally, numerical simulations are given to verify the main results.
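For intuition, the following is a minimal sketch of the kind of off-policy integral ADP iteration the abstract alludes to for the state-feedback case, shown on a hypothetical single double-integrator agent rather than the paper's multi-agent setup. The matrices A and B, the weights Q and R, the initial gain K0, and all parameters are illustrative assumptions; (A, B) appear only inside the trajectory simulator, never in the learner, which uses state and input data alone.

```python
import numpy as np

# Hypothetical example system (double integrator); (A, B) are used ONLY
# to generate trajectory data, never by the learning iteration below.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)   # state weight (assumption)
r = 1.0         # scalar input weight R (assumption)

dt, T, N = 1e-3, 0.5, 30           # Euler step, interval length, #intervals
steps = int(T / dt)
K0 = np.array([[1.0, 1.0]])        # assumed initial stabilizing gain

rng = np.random.default_rng(0)
freqs = rng.uniform(0.5, 10.0, 10)

def explore(t):
    # persistently exciting probing noise (sum of sinusoids)
    return 0.5 * np.sum(np.sin(freqs * t))

# ---- data collection under the fixed behavior policy u = -K0 x + e ----
x = np.array([1.0, -1.0])
D, Sxx, Ixu = [], [], []           # quadratic differences and integrals
t = 0.0
for i in range(N):
    q0 = np.array([x[0]**2, 2*x[0]*x[1], x[1]**2])  # symmetric basis of x^T P x
    S = np.zeros((2, 2)); Iu = np.zeros(2)
    for _ in range(steps):
        u = float(-K0 @ x) + explore(t)
        S += np.outer(x, x) * dt   # integral of x x^T over the interval
        Iu += u * x * dt           # integral of u x over the interval
        x = x + (A @ x + B.flatten() * u) * dt   # Euler integration
        t += dt
    q1 = np.array([x[0]**2, 2*x[0]*x[1], x[1]**2])
    D.append(q1 - q0); Sxx.append(S); Ixu.append(Iu)

# ---- off-policy policy iteration on the SAME data (model-free) ----
# Each interval yields one linear equation in the unknowns
# [p11, p12, p22, k1, k2], from
#   x^T P_k x |_t^{t+T} = -int x^T (Q + K_k^T R K_k) x
#                         + 2 int (u + K_k x)^T R K_{k+1} x,
# with K_{k+1} = R^{-1} B^T P_k learned directly from data.
K = K0.copy()
for it in range(10):
    Phi, b = [], []
    for d, S, Iu in zip(D, Sxx, Ixu):
        Qk = Q + r * K.T @ K
        Phi.append(np.concatenate([d, -2*r*(Iu + S @ K.flatten())]))
        b.append(-np.trace(Qk @ S))
    theta, *_ = np.linalg.lstsq(np.array(Phi), np.array(b), rcond=None)
    Knew = theta[3:].reshape(1, 2)
    if np.linalg.norm(Knew - K) < 1e-6:
        break
    K = Knew
print("learned gain:", K)   # approaches the LQR gain for (A, B, Q, R)
```

Note the off-policy structure: the data are collected once under a fixed exploratory policy, and the gain is then improved iteratively by re-solving the least-squares problem on the same data, which is what allows the iteration to proceed without any knowledge of A or B.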

Keywords: Continuous-time MAS, Model-free, Reinforcement learning, Output feedback

Article history: Received 27 November 2019, Revised 19 February 2020, Accepted 12 April 2020, Available online 26 May 2020, Version of Record 26 May 2020.

DOI: https://doi.org/10.1016/j.amc.2020.125312