Quantifying the effects of environment and population diversity in multi-agent reinforcement learning

Authors: Kevin R. McKee, Joel Z. Leibo, Charlie Beattie, Richard Everett

Abstract

Generalization is a major challenge for multi-agent reinforcement learning. How well does an agent perform when placed in novel environments and when interacting with new co-players? In this paper, we investigate and quantify the relationship between generalization and diversity in the multi-agent domain. Across the range of multi-agent environments considered here, procedurally generating training levels significantly improves agent performance on held-out levels. However, agent performance on the specific levels used in training sometimes declines as a result. To better understand the effects of co-player variation, we introduce a new environment-agnostic measure of behavioral diversity. Results demonstrate that population size and intrinsic motivation are both effective methods of generating greater population diversity. In turn, training with a diverse set of co-players strengthens agent performance in some (but not all) cases.
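The abstract does not specify how its behavioral diversity measure is computed. As a purely illustrative sketch of one environment-agnostic approach, the snippet below scores a population by the mean pairwise Jensen-Shannon distance between co-players' action distributions, evaluated on a shared batch of states. The function and argument names (population_diversity, action_probs) are hypothetical and not taken from the paper.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def population_diversity(action_probs: np.ndarray) -> float:
    """Mean pairwise Jensen-Shannon distance across a population.

    action_probs: shape (n_agents, n_states, n_actions), where
    action_probs[i, s] is agent i's action distribution at state s,
    evaluated on a batch of states shared by all agents.
    """
    n_agents = action_probs.shape[0]
    total, pairs = 0.0, 0
    for i in range(n_agents):
        for j in range(i + 1, n_agents):
            # JS distance per state (base 2 bounds it in [0, 1]),
            # averaged over the shared state batch.
            total += jensenshannon(
                action_probs[i], action_probs[j], base=2.0, axis=-1
            ).mean()
            pairs += 1
    return total / max(pairs, 1)

# Example: three agents evaluated on 100 shared states with 4 actions.
rng = np.random.default_rng(0)
logits = rng.normal(size=(3, 100, 4))
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
print(population_diversity(probs))
```

Under this construction, a population of identical policies scores 0 and policies that concentrate on disjoint actions score near 1, so the measure depends only on action distributions and transfers across environments.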

Keywords: Machine learning, Deep reinforcement learning, Multi-agent, Diversity

Paper URL: https://doi.org/10.1007/s10458-022-09548-8