TY - GEN
T1 - Accurate Latent Factor Analysis via Particle Swarm Optimizers
AU - Chen, Jia
AU - Luo, Xin
AU - Zhou, Mengchu
N1 - Funding Information:
*This research is supported in part by the National Natural Science Foundation of China under grant 61772493, in part by the Natural Science Foundation of Chongqing (China) under grant cstc2019jcyjjqX0013, and in part by the Chongqing Research Program of Technology Innovation and Application under grants cstc2019jscx-fxydX0024, cstc2019jscx-fxydX0027 and cstc2018jszx-cyzdX0041. (Corresponding authors: MengChu Zhou, Xin Luo).
Publisher Copyright:
© 2021 IEEE.
PY - 2021
Y1 - 2021
N2 - A stochastic-gradient-descent-based Latent Factor Analysis (LFA) model is highly efficient in representation learning of a High-Dimensional and Sparse (HiDS) matrix. Its learning rate adaptation is vital to its efficiency. Such adaptation can be realized with an evolutionary computing algorithm. However, the resultant model tends to suffer from two issues: a) premature convergence of the swarm of learning rates, caused by the adopted evolution algorithm, and b) premature convergence of the LFA model, caused jointly by the evolution-based learning rate adaptation and the optimization algorithm. This paper focuses on methods to address these issues. A Hierarchical Particle-swarm-optimization-incorporated Latent factor analysis (HPL) model with a two-layered structure is proposed, where the first layer pre-trains the desired latent factors with a position-transitional particle-swarm-optimization-based LFA model, and the second layer refines the latent factors with a newly proposed mini-batch particle swarm optimizer. With such a design, an HPL model effectively handles premature convergence, as supported by the positive experimental results achieved on HiDS matrices from industrial applications.
AB - A stochastic-gradient-descent-based Latent Factor Analysis (LFA) model is highly efficient in representation learning of a High-Dimensional and Sparse (HiDS) matrix. Its learning rate adaptation is vital to its efficiency. Such adaptation can be realized with an evolutionary computing algorithm. However, the resultant model tends to suffer from two issues: a) premature convergence of the swarm of learning rates, caused by the adopted evolution algorithm, and b) premature convergence of the LFA model, caused jointly by the evolution-based learning rate adaptation and the optimization algorithm. This paper focuses on methods to address these issues. A Hierarchical Particle-swarm-optimization-incorporated Latent factor analysis (HPL) model with a two-layered structure is proposed, where the first layer pre-trains the desired latent factors with a position-transitional particle-swarm-optimization-based LFA model, and the second layer refines the latent factors with a newly proposed mini-batch particle swarm optimizer. With such a design, an HPL model effectively handles premature convergence, as supported by the positive experimental results achieved on HiDS matrices from industrial applications.
KW - Big Data
KW - High-dimensional and Sparse Matrix
KW - Industrial Application
KW - Large-Scale Incomplete Data
KW - Latent Factor Analysis (LFA)
KW - Machine Learning
KW - Missing Data Estimation
KW - Particle Swarm Optimization (PSO)
UR - http://www.scopus.com/inward/record.url?scp=85124266741&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85124266741&partnerID=8YFLogxK
U2 - 10.1109/SMC52423.2021.9659218
DO - 10.1109/SMC52423.2021.9659218
M3 - Conference contribution
AN - SCOPUS:85124266741
T3 - Conference Proceedings - IEEE International Conference on Systems, Man and Cybernetics
SP - 2930
EP - 2935
BT - 2021 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2021
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2021 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2021
Y2 - 17 October 2021 through 20 October 2021
ER -