Efficient and High-quality Recommendations via Momentum-incorporated Parallel Stochastic Gradient Descent-Based Learning

Xin Luo, Wen Qin, Ani Dong, Khaled Sedraoui, Mengchu Zhou

Research output: Contribution to journal › Article › peer-review

3 Scopus citations

Abstract

A recommender system (RS) relying on latent factor analysis usually adopts stochastic gradient descent (SGD) as its learning algorithm. However, owing to its serial mechanism, an SGD algorithm suffers from low efficiency and scalability when handling large-scale industrial problems. To address this issue, this study proposes a momentum-incorporated parallel stochastic gradient descent (MPSGD) algorithm, whose main idea is two-fold: a) implementing parallelization via a novel data-splitting strategy, and b) accelerating the convergence rate by integrating momentum effects into its training process. With it, an MPSGD-based latent factor (MLF) model is achieved, which is capable of performing efficient and high-quality recommendations. Experimental results on four high-dimensional and sparse matrices generated by industrial RSs indicate that, owing to the MPSGD algorithm, an MLF model outperforms existing state-of-the-art models in both computational efficiency and scalability.
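To make the abstract's idea concrete, the sketch below shows a minimal, sequential form of momentum-incorporated SGD for a latent factor model trained on observed (user, item, rating) triples of a sparse matrix. It is an illustrative assumption, not the paper's implementation: the function name, hyperparameter values, and the omission of the parallel data-splitting step are all choices made here for brevity.

```python
import numpy as np

def mlf_momentum_sgd(ratings, n_users, n_items, k=8, lr=0.01,
                     beta=0.9, reg=0.02, epochs=200, seed=0):
    """Sequential sketch of momentum-incorporated SGD for a latent
    factor model. `ratings` is a list of (u, i, r) triples, i.e., the
    observed entries of a high-dimensional and sparse matrix.
    NOTE: an illustrative simplification -- the paper's MPSGD also
    parallelizes training via a data-splitting strategy, omitted here."""
    rng = np.random.default_rng(seed)
    P = rng.normal(scale=0.1, size=(n_users, k))  # user latent factors
    Q = rng.normal(scale=0.1, size=(n_items, k))  # item latent factors
    vP = np.zeros_like(P)  # momentum (velocity) accumulators
    vQ = np.zeros_like(Q)
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - P[u] @ Q[i]            # prediction error on one entry
            gP = -err * Q[i] + reg * P[u]    # L2-regularized gradients
            gQ = -err * P[u] + reg * Q[i]
            vP[u] = beta * vP[u] - lr * gP   # momentum-smoothed step
            vQ[i] = beta * vQ[i] - lr * gQ
            P[u] += vP[u]
            Q[i] += vQ[i]
    return P, Q
```

The momentum terms `vP`/`vQ` accumulate past gradient directions, which is what accelerates convergence relative to plain SGD; in the paper's parallel setting, the rating triples would additionally be partitioned into disjoint blocks so that workers can update non-overlapping rows of `P` and `Q` concurrently.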

Original language: English (US)
Article number: 9205688
Pages (from-to): 402-411
Number of pages: 10
Journal: IEEE/CAA Journal of Automatica Sinica
Volume: 8
Issue number: 2
DOIs
State: Published - Feb 2021

All Science Journal Classification (ASJC) codes

  • Control and Systems Engineering
  • Information Systems
  • Artificial Intelligence

Keywords

  • Big data
  • industrial application
  • industrial data
  • latent factor analysis
  • machine learning
  • parallel algorithm
  • recommender system (RS)
  • stochastic gradient descent (SGD)

