Abstract
A Stochastic Gradient Descent (SGD)-based Latent Factor Analysis (LFA) model is highly efficient at representation learning on a High-Dimensional and Sparse (HiDS) matrix, where learning rate adaptation is vital to its efficiency and practicability. Learning rate adaptation of an SGD-based LFA model can be achieved efficiently through learning rate evolution with an evolutionary computing algorithm. However, a resultant model commonly suffers from twofold premature convergence issues, i.e., a) premature convergence of the learning rate swarm caused by the evolution algorithm, and b) premature convergence of the LFA model caused by the compound effects of evolution-based learning rate adaptation and the adopted optimization algorithm. To address these issues, this work proposes a Hierarchical Particle swarm optimization-incorporated Latent factor analysis (HPL) model with a two-layered structure. The first layer pre-trains the desired latent factors with a position-transitional particle swarm optimization-based LFA model with learning rate adaptation, while the second layer refines the latent factors with a newly proposed mini-batch particle swarm optimization algorithm. Experimental results on four HiDS matrices generated by industrial applications demonstrate that an HPL model can well handle the mentioned premature convergence issues, thereby achieving a highly accurate representation of HiDS matrices.
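The abstract's first layer, i.e., SGD-based latent factor analysis whose learning rate is evolved by a particle swarm, can be sketched as follows. This is a minimal illustrative sketch, not the authors' HPL implementation: all function names, hyper-parameters (swarm size, inertia and acceleration coefficients, learning-rate bounds), and the plain-list sparse-matrix encoding are assumptions made for the example.

```python
import random

def sgd_epoch(P, Q, ratings, lr, reg=0.02):
    """One SGD pass over the observed entries of a sparse matrix.

    P, Q: latent factor tables (lists of lists); ratings: (row, col, value)
    triples for the known entries only. Returns RMSE measured during the pass.
    """
    se = 0.0
    for u, i, r in ratings:
        err = r - sum(P[u][k] * Q[i][k] for k in range(len(P[u])))
        for k in range(len(P[u])):
            pu, qi = P[u][k], Q[i][k]
            P[u][k] += lr * (err * qi - reg * pu)  # regularized SGD update
            Q[i][k] += lr * (err * pu - reg * qi)
        se += err * err
    return (se / len(ratings)) ** 0.5

def pso_adapt_lr(ratings, n_users, n_items, f=4, swarm=5, epochs=20, seed=0):
    """Evolve the SGD learning rate with a small particle swarm.

    Each particle is a candidate learning rate; its fitness is the RMSE of
    one SGD pass over the shared latent factors (an assumed, simplified
    fitness; the paper's scheme is more elaborate).
    """
    rng = random.Random(seed)
    P = [[rng.uniform(0.0, 0.1) for _ in range(f)] for _ in range(n_users)]
    Q = [[rng.uniform(0.0, 0.1) for _ in range(f)] for _ in range(n_items)]
    pos = [rng.uniform(0.001, 0.1) for _ in range(swarm)]  # learning rates
    vel = [0.0] * swarm
    pbest, pbest_fit = list(pos), [float("inf")] * swarm
    gbest, gbest_fit = pos[0], float("inf")
    for _ in range(epochs):
        for j in range(swarm):
            fit = sgd_epoch(P, Q, ratings, pos[j])
            if fit < pbest_fit[j]:
                pbest_fit[j], pbest[j] = fit, pos[j]
            if fit < gbest_fit:
                gbest_fit, gbest = fit, pos[j]
        for j in range(swarm):
            # canonical PSO velocity update (assumed coefficients)
            vel[j] = (0.7 * vel[j]
                      + 1.4 * rng.random() * (pbest[j] - pos[j])
                      + 1.4 * rng.random() * (gbest - pos[j]))
            pos[j] = min(max(pos[j] + vel[j], 1e-4), 0.5)  # keep lr bounded
    return P, Q, gbest, gbest_fit
```

In this sketch the swarm and the latent factors are trained jointly: every particle's fitness evaluation is itself an SGD pass, so learning-rate evolution costs no extra gradient computations beyond normal training.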
Field | Value
---|---
Original language | English (US)
Pages (from-to) | 1524-1536
Number of pages | 13
Journal | IEEE Transactions on Big Data
Volume | 8
Issue number | 6
DOIs |
State | Published - Dec 1 2022
All Science Journal Classification (ASJC) codes
- Information Systems
- Information Systems and Management
Keywords
- Big data
- high-dimensional and sparse matrix
- industrial application
- large-scale incomplete data
- latent factor analysis
- missing data estimation
- particle swarm optimization