Abstract
We propose a unified framework for estimating low-rank matrices through nonconvex optimization based on gradient descent. Our framework is quite general and applies to both noisy and noiseless observations. In the general case with noisy observations, we show that, given an appropriate initial estimator, our algorithm is guaranteed to converge linearly to the unknown low-rank matrix up to the minimax optimal statistical error. In the generic noiseless setting, our algorithm converges to the unknown low-rank matrix at a linear rate and enables exact recovery with optimal sample complexity. In addition, we develop a new initialization algorithm that provides the desired initial estimator and outperforms existing initialization algorithms for nonconvex low-rank matrix estimation. We illustrate the superiority of our framework through three examples: matrix regression, matrix completion, and one-bit matrix completion. We also corroborate our theory through extensive experiments on synthetic data.
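The kind of algorithm the abstract describes can be sketched as follows. This is a minimal illustrative example for the matrix regression setting with a spectral initialization and factored gradient descent, not the paper's exact algorithm; all dimensions, the Gaussian sensing model, and the step size are assumptions made for the sketch.

```python
import numpy as np

# Illustrative sketch only: gradient descent on the factors (U, V) for
# noiseless low-rank matrix regression, in the spirit of the framework
# described in the abstract. Dimensions, sensing model, initialization,
# and step size are all assumptions, not the paper's specification.

rng = np.random.default_rng(0)
d1, d2, r, n = 20, 15, 2, 2000

# Well-conditioned rank-r target M* = Qu diag(s) Qv^T.
Qu, _ = np.linalg.qr(rng.normal(size=(d1, r)))
Qv, _ = np.linalg.qr(rng.normal(size=(d2, r)))
M_star = Qu @ np.diag([10.0, 8.0]) @ Qv.T

# Gaussian sensing matrices and noiseless observations y_i = <X_i, M*>.
X = rng.normal(size=(n, d1, d2))
y = np.einsum('nij,ij->n', X, M_star)

# Spectral initialization: top-r SVD of (1/n) sum_i y_i X_i, which
# concentrates around M* under this sensing model.
M0 = np.einsum('n,nij->ij', y, X) / n
Uf, s, Vt = np.linalg.svd(M0, full_matrices=False)
U = Uf[:, :r] * np.sqrt(s[:r])
V = Vt[:r].T * np.sqrt(s[:r])

# Gradient descent on f(U, V) = (1/2n) sum_i (<X_i, U V^T> - y_i)^2.
eta = 0.02  # assumed step size, on the order of 1 / sigma_max(M*)
for _ in range(800):
    resid = np.einsum('nij,ij->n', X, U @ V.T) - y
    grad_M = np.einsum('n,nij->ij', resid, X) / n  # gradient w.r.t. U V^T
    U, V = U - eta * grad_M @ V, V - eta * grad_M.T @ U

# With noiseless observations the iterates recover M* to high accuracy,
# consistent with the linear-convergence / exact-recovery behavior claimed.
err = np.linalg.norm(U @ V.T - M_star) / np.linalg.norm(M_star)
```

In the noisy case the same iteration would plateau at a statistical error floor instead of converging to zero; only the noiseless variant is shown here.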
| Original language | English (US) |
|---|---|
| State | Published - 2017 |
| Externally published | Yes |
| Event | 20th International Conference on Artificial Intelligence and Statistics, AISTATS 2017 - Fort Lauderdale, United States |
| Duration | Apr 20, 2017 → Apr 22, 2017 |
Conference
| Conference | 20th International Conference on Artificial Intelligence and Statistics, AISTATS 2017 |
|---|---|
| Country/Territory | United States |
| City | Fort Lauderdale |
| Period | 4/20/17 → 4/22/17 |
All Science Journal Classification (ASJC) codes
- Artificial Intelligence
- Statistics and Probability