TY - JOUR
T1 - Efficient and High-quality Recommendations via Momentum-incorporated Parallel Stochastic Gradient Descent-Based Learning
AU - Luo, Xin
AU - Qin, Wen
AU - Dong, Ani
AU - Sedraoui, Khaled
AU - Zhou, Mengchu
N1 - Funding Information:
This work was supported in part by the National Natural Science Foundation of China (61772493), the Deanship of Scientific Research (DSR) at King Abdulaziz University (RG-48-135-40), Guangdong Province Universities and College Pearl River Scholar Funded Scheme (2019), and the Natural Science Foundation of Chongqing (cstc2019jcyjjqX0013).
Publisher Copyright:
© 2014 Chinese Association of Automation.
PY - 2021/2
Y1 - 2021/2
N2 - A recommender system (RS) relying on latent factor analysis usually adopts stochastic gradient descent (SGD) as its learning algorithm. However, owing to its serial mechanism, SGD suffers from low efficiency and scalability when handling large-scale industrial problems. To address this issue, this study proposes a momentum-incorporated parallel stochastic gradient descent (MPSGD) algorithm, whose main idea is twofold: a) implementing parallelization via a novel data-splitting strategy, and b) accelerating convergence by integrating momentum effects into the training process. With it, an MPSGD-based latent factor (MLF) model is built that delivers efficient and high-quality recommendations. Experimental results on four high-dimensional and sparse matrices generated by industrial RSs indicate that, owing to the MPSGD algorithm, the MLF model outperforms existing state-of-the-art models in both computational efficiency and scalability.
AB - A recommender system (RS) relying on latent factor analysis usually adopts stochastic gradient descent (SGD) as its learning algorithm. However, owing to its serial mechanism, SGD suffers from low efficiency and scalability when handling large-scale industrial problems. To address this issue, this study proposes a momentum-incorporated parallel stochastic gradient descent (MPSGD) algorithm, whose main idea is twofold: a) implementing parallelization via a novel data-splitting strategy, and b) accelerating convergence by integrating momentum effects into the training process. With it, an MPSGD-based latent factor (MLF) model is built that delivers efficient and high-quality recommendations. Experimental results on four high-dimensional and sparse matrices generated by industrial RSs indicate that, owing to the MPSGD algorithm, the MLF model outperforms existing state-of-the-art models in both computational efficiency and scalability.
KW - Big data
KW - industrial application
KW - industrial data
KW - latent factor analysis
KW - machine learning
KW - parallel algorithm
KW - recommender system (RS)
KW - stochastic gradient descent (SGD)
UR - http://www.scopus.com/inward/record.url?scp=85099413324&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85099413324&partnerID=8YFLogxK
U2 - 10.1109/JAS.2020.1003396
DO - 10.1109/JAS.2020.1003396
M3 - Article
AN - SCOPUS:85099413324
SN - 2329-9266
VL - 8
SP - 402
EP - 411
JO - IEEE/CAA Journal of Automatica Sinica
JF - IEEE/CAA Journal of Automatica Sinica
IS - 2
M1 - 9205688
ER -