Adaptive evolution strategy with ensemble of mutations for Reinforcement Learning

Authors:

Highlights:

Abstract:

Evolving the weights of neural networks through evolutionary computation (neuroevolution) has proven scalable across a range of challenging Reinforcement Learning (RL) control tasks. However, as with most black-box optimization methods, existing neuroevolution approaches require an additional adaptation process to balance exploration and exploitation by selecting sensitive hyper-parameters throughout evolution. These methods are therefore often burdened by the computational complexity of such adaptation processes, which typically rely on a number of elaborately formulated strategy parameters. In this paper, an Evolution Strategy (ES) with a simple yet efficient ensemble of mutation strategies is proposed. Specifically, two distinct mutation strategies coexist throughout the evolution process, each associated with its own population subset. The elites used to generate the offspring population are then selected by co-evaluating the combined population. Experiments on a testbed of six black-box optimization problems, generated from a classical control problem and six established continuous RL agents, demonstrate that the proposed method converges faster and scales better than the canonical ES. Furthermore, the proposed Adaptive Ensemble ES (AEES) achieves, on average, 5-10000x better sample complexity on low-dimensional problems and 10-100x on high-dimensional problems compared with the associated base DRL agents.
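For concreteness, the ensemble mechanism described in the abstract can be sketched as follows. This is a minimal illustration only, assuming Gaussian mutation with two fixed step sizes, an equal 50/50 population split, and elite-mean recombination; the function name aees_sketch and all parameter names are hypothetical and not taken from the paper.

```python
import numpy as np

# Minimal sketch of an ES with an ensemble of two mutation strategies,
# following the high-level description in the abstract. The two fixed
# Gaussian step sizes, the 50/50 population split, and all names below
# are illustrative assumptions, not the paper's exact formulation.
def aees_sketch(f, theta, iters=200, pop_size=20, elite_frac=0.2,
                sigma_a=0.01, sigma_b=0.1, seed=0):
    """Maximize f over the parameter vector theta."""
    rng = np.random.default_rng(seed)
    n_elite = max(1, int(elite_frac * pop_size))
    half = pop_size // 2
    for _ in range(iters):
        # Each mutation strategy perturbs theta to form its own
        # population subset.
        subset_a = theta + sigma_a * rng.standard_normal((half, theta.size))
        subset_b = theta + sigma_b * rng.standard_normal((pop_size - half, theta.size))
        # Co-evaluate the combined population to select the elites.
        population = np.vstack([subset_a, subset_b])
        fitness = np.array([f(x) for x in population])
        elites = population[np.argsort(fitness)[-n_elite:]]
        # Offspring in the next generation are centered on the elite mean.
        theta = elites.mean(axis=0)
    return theta

# Usage: maximize a toy objective (negative squared distance to a target).
target = np.ones(8)
best = aees_sketch(lambda x: -np.sum((x - target) ** 2), np.zeros(8))
```

Because both strategies are judged on the same combined fitness ranking, whichever step size is better suited to the current stage of the search naturally contributes more elites, which is the intuition behind letting the two mutation strategies coexist.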

Keywords: Evolution strategy, Reinforcement Learning, Ensemble, Mutation strategy, Black-box optimization

Article history: Received 6 December 2021; Revised 17 March 2022; Accepted 18 March 2022; Available online 24 March 2022; Version of Record 4 April 2022.

DOI: https://doi.org/10.1016/j.knosys.2022.108624