A stochastic-gradient-descent (SGD)-based Latent Factor Analysis (LFA) model is highly efficient in representation learning for a High-Dimensional and Sparse (HiDS) matrix. Its learning rate adaptation is vital to this efficiency, and such adaptation can be realized with an evolutionary computation algorithm. However, the resultant model tends to suffer from two issues: a) premature convergence of the swarm of learning rates, caused by the adopted evolutionary algorithm; and b) premature convergence of the LFA model itself, caused jointly by the evolution-based learning rate adaptation and the underlying optimization algorithm. This paper addresses these issues by proposing a Hierarchical Particle-swarm-optimization-incorporated Latent factor analysis (HPL) model with a two-layered structure: the first layer pre-trains the desired latent factors with a position-transitional particle-swarm-optimization-based LFA model, and the second layer refines them with a newly proposed mini-batch particle swarm optimizer. With this design, an HPL model handles premature convergence well, as supported by positive experimental results on HiDS matrices from industrial applications.
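The paper's position-transitional and mini-batch PSO variants are not specified in this abstract, so the following is only a generic, minimal sketch of the underlying idea: using a small particle swarm to adapt the learning rate of an SGD-based LFA model on synthetic sparse data. All names (`sgd_epoch`, `rmse`) and parameter values (`w`, `c1`, `c2`, swarm size, bounds) are illustrative assumptions, not the authors' settings.

```python
# Illustrative sketch (NOT the paper's HPL algorithm): a particle swarm
# adapts the SGD learning rate of a basic latent factor model.
import numpy as np

rng = np.random.default_rng(0)

# Tiny synthetic HiDS-style data: sparse (user, item, rating) triples.
n_users, n_items, k = 20, 15, 4
P_true = rng.normal(size=(n_users, k))
Q_true = rng.normal(size=(n_items, k))
obs = [(u, i, float(P_true[u] @ Q_true[i]))
       for u in range(n_users) for i in range(n_items) if rng.random() < 0.2]

def rmse(P, Q):
    # Training RMSE over the observed entries (a real setup would hold
    # out a validation set for the fitness evaluation).
    return float(np.sqrt(np.mean([(r - P[u] @ Q[i]) ** 2 for u, i, r in obs])))

def sgd_epoch(P, Q, eta, lam=0.02):
    # One SGD pass over observed entries with learning rate eta
    # and L2 regularization lam; returns updated copies.
    P, Q = P.copy(), Q.copy()
    for u, i, r in obs:
        e = r - P[u] @ Q[i]
        P[u] += eta * (e * Q[i] - lam * P[u])
        Q[i] += eta * (e * P[u] - lam * Q[i])
    return P, Q

# Each particle is one candidate learning rate.
n_particles, w, c1, c2 = 5, 0.6, 1.5, 1.5
eta = rng.uniform(0.001, 0.1, n_particles)   # particle positions
vel = np.zeros(n_particles)                  # particle velocities
pbest, pbest_fit = eta.copy(), np.full(n_particles, np.inf)

P = rng.normal(scale=0.1, size=(n_users, k))
Q = rng.normal(scale=0.1, size=(n_items, k))
init_rmse = rmse(P, Q)

for epoch in range(20):
    fits, cands = [], []
    for j in range(n_particles):
        Pj, Qj = sgd_epoch(P, Q, eta[j])     # fitness = RMSE after one epoch
        fits.append(rmse(Pj, Qj))
        cands.append((Pj, Qj))
        if fits[j] < pbest_fit[j]:
            pbest_fit[j], pbest[j] = fits[j], eta[j]
    gbest = pbest[np.argmin(pbest_fit)]
    P, Q = cands[int(np.argmin(fits))]       # keep the best-trained factors
    # Standard PSO velocity/position update on the learning rates.
    r1, r2 = rng.random(n_particles), rng.random(n_particles)
    vel = w * vel + c1 * r1 * (pbest - eta) + c2 * r2 * (gbest - eta)
    eta = np.clip(eta + vel, 1e-4, 0.5)

print("RMSE:", round(rmse(P, Q), 3))
```

Because every particle's fitness is evaluated from the same current factors, the swarm can quickly collapse onto one learning rate; this is exactly the premature-convergence behavior the abstract says the HPL model is designed to mitigate.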