TY - GEN
T1 - A learning automata-based particle swarm optimization algorithm for noisy environment
AU - Zhang, Junqi
AU - Xu, Linwei
AU - Ma, Ji
AU - Zhou, Mengchu
N1 - Publisher Copyright:
© 2015 IEEE.
PY - 2015/9/10
Y1 - 2015/9/10
N2 - Particle Swarm Optimization (PSO) is an outstanding evolutionary algorithm designed to tackle various optimization problems. However, its performance deteriorates significantly in noisy environments. Some studies have addressed this issue by introducing a resampling method. Most existing methods allocate a fixed and predetermined budget of re-evaluations for every iteration, but cannot adapt the budget to different environments. Our previous work proposed PSO-LA to integrate PSO with a Learning Automaton (LA) variant. PSO-LA utilizes LA's flexible self-adaptation and automatic learning capability to learn the budget allocation for each iteration. This work further improves PSO-LA by introducing a subset-scheme-based LA (subLA) into PSO to increase the probability of correctly identifying the best particle through the pursuit of a subset of better-performing particles, yielding a new method called LAPSO. LAPSO does not record the historical global best solution but finds it from the subset learned by subLA, allowing it to escape a trapped area that may contain a false global best solution. It can also adaptively allocate the computing budget for every particle per iteration and, accordingly, the total number of iterations. Through experiments on 20 large-scale benchmark functions subject to different levels of noise, this work convincingly shows that LAPSO outperforms existing methods in both accuracy and convergence rate on optimization problems in noisy environments.
AB - Particle Swarm Optimization (PSO) is an outstanding evolutionary algorithm designed to tackle various optimization problems. However, its performance deteriorates significantly in noisy environments. Some studies have addressed this issue by introducing a resampling method. Most existing methods allocate a fixed and predetermined budget of re-evaluations for every iteration, but cannot adapt the budget to different environments. Our previous work proposed PSO-LA to integrate PSO with a Learning Automaton (LA) variant. PSO-LA utilizes LA's flexible self-adaptation and automatic learning capability to learn the budget allocation for each iteration. This work further improves PSO-LA by introducing a subset-scheme-based LA (subLA) into PSO to increase the probability of correctly identifying the best particle through the pursuit of a subset of better-performing particles, yielding a new method called LAPSO. LAPSO does not record the historical global best solution but finds it from the subset learned by subLA, allowing it to escape a trapped area that may contain a false global best solution. It can also adaptively allocate the computing budget for every particle per iteration and, accordingly, the total number of iterations. Through experiments on 20 large-scale benchmark functions subject to different levels of noise, this work convincingly shows that LAPSO outperforms existing methods in both accuracy and convergence rate on optimization problems in noisy environments.
UR - http://www.scopus.com/inward/record.url?scp=84963623736&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84963623736&partnerID=8YFLogxK
U2 - 10.1109/CEC.2015.7256885
DO - 10.1109/CEC.2015.7256885
M3 - Conference contribution
AN - SCOPUS:84963623736
T3 - 2015 IEEE Congress on Evolutionary Computation, CEC 2015 - Proceedings
SP - 141
EP - 147
BT - 2015 IEEE Congress on Evolutionary Computation, CEC 2015 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - IEEE Congress on Evolutionary Computation, CEC 2015
Y2 - 25 May 2015 through 28 May 2015
ER -