TY - JOUR
T1 - A Momentum-Accelerated Hessian-Vector-Based Latent Factor Analysis Model
AU - Li, Weiling
AU - Luo, Xin
AU - Yuan, Huaqiang
AU - Zhou, MengChu
N1 - Funding Information:
This work was supported in part by the National Natural Science Foundation of China under Grant 62102086, in part by the Guangdong Province Universities and Colleges Pearl River Scholar Funded Scheme (2019), in part by the Guangdong Basic and Applied Basic Research Foundation under Grant 2019A1515111058, in part by the China Postdoctoral Science Foundation funded project under Grant 2020M683293, in part by the CAAI-Huawei MindSpore Open Fund under Grant CAAIXSJLJJ-2021-035A, and in part by FDCT (Fundo para o Desenvolvimento das Ciências e da Tecnologia) under Grant 0047/2021/A1.
Publisher Copyright:
© 2008-2012 IEEE.
PY - 2023/3/1
Y1 - 2023/3/1
N2 - Service-oriented applications commonly involve high-dimensional and sparse (HiDS) interactions among users and service-related entities, e.g., user-item interactions from a personalized recommendation service system. How to perform precise and efficient representation learning on such HiDS interaction data is a hot yet thorny issue. An efficient approach to it is latent factor analysis (LFA), which commonly depends on large-scale non-convex optimization. Hence, it is vital to implement an LFA model able to approximate second-order stationary points efficiently, thereby enhancing its representation learning ability. However, existing second-order LFA models suffer from high computational cost, which significantly reduces their practicability. To address this issue, this paper presents a Momentum-accelerated Hessian-vector algorithm (MH) for precise and efficient LFA on HiDS data. Its main ideas are two-fold: a) adopting the principle of a Hessian-vector-product-based method to utilize second-order information without manipulating a Hessian matrix directly, and b) incorporating a generalized momentum method into its parameter learning scheme to accelerate its convergence to a stationary point. Experimental results on nine industrial datasets demonstrate that, compared with state-of-the-art LFA models, an MH-based LFA model achieves gains in both accuracy and convergence rate. These positive outcomes also indicate that a generalized momentum method is compatible with algorithms that implicitly rely on gradients, e.g., second-order algorithms.
AB - Service-oriented applications commonly involve high-dimensional and sparse (HiDS) interactions among users and service-related entities, e.g., user-item interactions from a personalized recommendation service system. How to perform precise and efficient representation learning on such HiDS interaction data is a hot yet thorny issue. An efficient approach to it is latent factor analysis (LFA), which commonly depends on large-scale non-convex optimization. Hence, it is vital to implement an LFA model able to approximate second-order stationary points efficiently, thereby enhancing its representation learning ability. However, existing second-order LFA models suffer from high computational cost, which significantly reduces their practicability. To address this issue, this paper presents a Momentum-accelerated Hessian-vector algorithm (MH) for precise and efficient LFA on HiDS data. Its main ideas are two-fold: a) adopting the principle of a Hessian-vector-product-based method to utilize second-order information without manipulating a Hessian matrix directly, and b) incorporating a generalized momentum method into its parameter learning scheme to accelerate its convergence to a stationary point. Experimental results on nine industrial datasets demonstrate that, compared with state-of-the-art LFA models, an MH-based LFA model achieves gains in both accuracy and convergence rate. These positive outcomes also indicate that a generalized momentum method is compatible with algorithms that implicitly rely on gradients, e.g., second-order algorithms.
KW - generalized momentum method
KW - Hessian-vector
KW - high-dimensional and sparse data
KW - latent factor analysis
KW - machine learning
KW - recommendation service
KW - representation learning
KW - second-order optimization
KW - service application
KW - services computing
UR - http://www.scopus.com/inward/record.url?scp=85131756449&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85131756449&partnerID=8YFLogxK
U2 - 10.1109/TSC.2022.3177316
DO - 10.1109/TSC.2022.3177316
M3 - Article
AN - SCOPUS:85131756449
SN - 1939-1374
VL - 16
SP - 830
EP - 844
JO - IEEE Transactions on Services Computing
JF - IEEE Transactions on Services Computing
IS - 2
ER -