A particle swarm optimizer with multi-level population sampling and dynamic p-learning mechanisms for large-scale optimization

Abstract:

Large-scale optimization, which has received much attention in recent years, is an inherently challenging problem. This paper proposes a particle swarm optimizer with multi-level population sampling and dynamic p-learning mechanisms to address the problem. The multi-level sampling mechanism is developed to support a balanced evolutionary search. It works by partitioning the particles of the swarm into multiple levels based on their fitness before each generation of evolution. A subset of the swarm is then dynamically sampled from the particles at the various levels for evolution, so as to encourage exploration at the beginning of evolution and exploitation towards the end, thus searching the space appropriately. The dynamic p-learning mechanism, on the other hand, is introduced to allow efficient particle learning while preserving swarm diversity during evolution. In this mechanism, each particle learns from one of the top 100p% particles of the sub-swarm, and the value of p associated with each particle is dynamically adjusted during evolution. By employing these two mechanisms, the resulting method aims to search the solution space of a large-scale global optimization problem appropriately, so as to identify the optimal or a near-optimal solution. The performance of the proposed method has been evaluated on the CEC'2010 and CEC'2013 benchmark suites for large-scale optimization and compared with related methods. Our results confirm the merits of the devised mechanisms, and show that the method achieves superior performance and outperforms related methods.
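The two mechanisms described in the abstract can be illustrated with a minimal sketch. Note this is only an interpretation of the abstract's description, not the paper's algorithm: the number of levels, the sampling schedule, the velocity-update coefficients, and the rule for adapting each particle's p value are all assumptions made here for illustration.

```python
import numpy as np

def sphere(x):
    """Toy objective for demonstration (global minimum 0 at the origin)."""
    return float(np.sum(x ** 2))

def multilevel_p_learning_pso(f, dim=20, swarm_size=40, levels=4,
                              iters=300, seed=0):
    """Hedged sketch: multi-level sampling + dynamic top-100p% learning.

    Assumptions (not from the paper): 4 fitness levels, half the swarm
    sampled per generation, a linear explore-to-exploit sampling schedule,
    and multiplicative adaptation of each particle's p value.
    """
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5.0, 5.0, (swarm_size, dim))
    V = np.zeros((swarm_size, dim))
    p = np.full(swarm_size, 0.5)                 # per-particle learning scope
    fit = np.array([f(x) for x in X])
    best = fit.min()
    for t in range(iters):
        # Partition particles into levels by fitness rank (level 0 = best).
        order = np.argsort(fit)
        level_of = np.empty(swarm_size, dtype=int)
        per = swarm_size // levels
        for lv in range(levels):
            hi = (lv + 1) * per if lv < levels - 1 else swarm_size
            level_of[order[lv * per:hi]] = lv
        # Sample a sub-swarm across levels: early generations favour worse
        # levels (exploration), later ones favour better levels (exploitation).
        frac = t / iters
        w = (1 - frac) * (level_of + 1) + frac * (levels - level_of)
        sub = rng.choice(swarm_size, size=swarm_size // 2,
                         replace=False, p=w / w.sum())
        sub_sorted = sub[np.argsort(fit[sub])]    # best of sub-swarm first
        for i in sub:
            # Learn from a random particle among the top 100p% of the sub-swarm.
            k = max(1, int(np.ceil(p[i] * len(sub_sorted))))
            exemplar = X[rng.choice(sub_sorted[:k])]
            r1, r2 = rng.random(dim), rng.random(dim)
            V[i] = r1 * V[i] + r2 * (exemplar - X[i])
            X[i] = X[i] + V[i]
            new = f(X[i])
            # Assumed p-adaptation: narrow the scope on improvement
            # (exploit the best), widen it otherwise (preserve diversity).
            p[i] = max(0.05, p[i] * 0.9) if new < fit[i] else min(1.0, p[i] * 1.1)
            fit[i] = new
        best = min(best, fit.min())
    return best

if __name__ == "__main__":
    print(multilevel_p_learning_pso(sphere))
```

On the 20-dimensional sphere function this sketch steadily reduces the best fitness, since every sampled particle is pulled toward a better-ranked exemplar while the p schedule keeps the exemplar pool from collapsing to a single particle.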

Keywords: Particle swarm optimization, Large-scale optimization, Population sampling, Particle learning strategy

Article history: Received 10 January 2022, Revised 29 January 2022, Accepted 6 February 2022, Available online 12 February 2022, Version of Record 24 February 2022.

DOI: https://doi.org/10.1016/j.knosys.2022.108382