Abstract
Non-negative latent factor (NLF) models represent high-dimensional and sparse (HiDS) matrices filled with non-negative data well; such matrices are frequently encountered in industrial applications like recommender systems. However, current NLF models mostly adopt Euclidean distance in their objective functions, which is only a special case of a β-divergence function. Hence, it is highly desirable to design a β-divergence-based NLF (β-NLF) model that uses a general β-divergence function, and to investigate its performance in recommender systems as β varies. To do so, we first formulate β-NLF's learning objective with a β-divergence function. Subsequently, we derive a general single latent factor-dependent, non-negative and multiplicative update scheme for β-NLF, and then design an efficient β-NLF algorithm. Experimental results on HiDS matrices from industrial applications indicate that, by carefully choosing the value of β, β-NLF outperforms an NLF model with Euclidean distance in missing-data prediction accuracy without increasing computational time. The research outcomes show the necessity of choosing an optimal β-divergence function to achieve the best performance of an NLF model on HiDS matrices. Hence, the proposed model has both theoretical and practical significance.
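To make the abstract's terminology concrete, the sketch below shows the standard elementwise β-divergence (β = 2 recovers the squared Euclidean case mentioned above, β = 1 the generalized KL divergence, β = 0 the Itakura-Saito divergence) together with a generic masked multiplicative update for a non-negative low-rank model on the observed entries of a sparse matrix. This is only an illustration under those standard definitions; the function names (`beta_divergence`, `multiplicative_update`) and the masked-update formulation are assumptions for exposition and are not the paper's single latent factor-dependent β-NLF scheme.

```python
import numpy as np

def beta_divergence(x, y, beta):
    """Standard elementwise beta-divergence d_beta(x, y) for x, y > 0.

    beta = 2 gives (half) the squared Euclidean distance, beta = 1 the
    generalized KL divergence, beta = 0 the Itakura-Saito divergence.
    """
    if beta == 0:
        return x / y - np.log(x / y) - 1.0
    if beta == 1:
        return x * np.log(x / y) - x + y
    return (x**beta + (beta - 1) * y**beta
            - beta * x * y**(beta - 1)) / (beta * (beta - 1))

def multiplicative_update(V, W, H, mask, beta, eps=1e-12):
    """One classical NMF-style multiplicative update of W and H that
    decreases the beta-divergence between the observed entries of V
    (mask == 1) and the non-negative low-rank model W @ H.

    Illustrative only: the paper's beta-NLF derives its own single
    latent factor-dependent update scheme tailored to HiDS matrices.
    """
    WH = W @ H + eps
    num = ((mask * V) * WH**(beta - 2)) @ H.T
    den = (mask * WH**(beta - 1)) @ H.T + eps
    W *= num / den                       # keeps W non-negative
    WH = W @ H + eps
    num = W.T @ ((mask * V) * WH**(beta - 2))
    den = W.T @ (mask * WH**(beta - 1)) + eps
    H *= num / den                       # keeps H non-negative
    return W, H
```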
| Original language | English (US) |
| --- | --- |
| Article number | 8809405 |
| Pages (from-to) | 4612-4623 |
| Number of pages | 12 |
| Journal | IEEE Transactions on Systems, Man, and Cybernetics: Systems |
| Volume | 51 |
| Issue number | 8 |
| DOIs | |
| State | Published - Aug 2021 |
All Science Journal Classification (ASJC) codes
- Software
- Control and Systems Engineering
- Human-Computer Interaction
- Computer Science Applications
- Electrical and Electronic Engineering
Keywords
- big data
- high-dimensional and sparse (HiDS) matrix
- industrial application
- learning algorithm
- non-negative latent factor (NLF) analysis
- recommender system
- β-divergence