Abstract
We study the problem of learning one-hidden-layer neural networks with Rectified Linear Unit (ReLU) activation function, where the inputs are sampled from the standard Gaussian distribution and the outputs are generated from a noisy teacher network. We analyze the performance of gradient descent for training such networks based on empirical risk minimization, and provide algorithm-dependent guarantees. In particular, we prove that tensor initialization followed by gradient descent can converge to the ground-truth parameters at a linear rate up to some statistical error. To the best of our knowledge, this is the first work characterizing the recovery guarantee for practical learning of one-hidden-layer ReLU networks with multiple neurons. Numerical experiments verify our theoretical findings.
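The teacher-student setup the abstract describes can be sketched in a few lines of NumPy: draw Gaussian inputs, generate labels from a noisy ReLU teacher, and run gradient descent on the empirical squared risk. This is only an illustrative sketch, not the paper's algorithm; the dimensions, step size, noise level, and the perturbed initialization (standing in for the paper's tensor initialization) are all assumptions chosen for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

d, k, n = 10, 5, 2000                   # input dim, hidden width, sample size (illustrative)
W_star = rng.standard_normal((k, d))    # ground-truth (teacher) weights

def relu(z):
    return np.maximum(z, 0.0)

# Inputs from the standard Gaussian; labels from a noisy teacher network
X = rng.standard_normal((n, d))
y = relu(X @ W_star.T).sum(axis=1) + 0.01 * rng.standard_normal(n)

# Gradient descent on the empirical squared risk, started near the truth
# (a stand-in for the tensor initialization analyzed in the paper)
W = W_star + 0.1 * rng.standard_normal((k, d))
lr = 0.1
for _ in range(500):
    H = X @ W.T                                    # (n, k) pre-activations
    resid = relu(H).sum(axis=1) - y                # (n,) residuals
    grad = ((H > 0) * resid[:, None]).T @ X / n    # gradient of the empirical risk w.r.t. W
    W -= lr * grad

rel_err = np.linalg.norm(W - W_star) / np.linalg.norm(W_star)
print(f"relative parameter error: {rel_err:.4f}")
```

With a warm start, the iterates contract toward the teacher weights until the label-noise floor dominates, which is the "linear rate up to statistical error" behavior the abstract refers to.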
| Original language | English (US) |
|---|---|
| Pages (from-to) | 1524-1534 |
| Number of pages | 11 |
| Journal | Proceedings of Machine Learning Research |
| Volume | 89 |
| State | Published - 2019 |
| Externally published | Yes |
| Event | 22nd International Conference on Artificial Intelligence and Statistics, AISTATS 2019 - Naha, Japan |
| Duration | Apr 16 2019 → Apr 18 2019 |
All Science Journal Classification (ASJC) codes
- Software
- Control and Systems Engineering
- Statistics and Probability
- Artificial Intelligence