Abstract
We study the problem of learning one-hidden-layer neural networks with the Rectified Linear Unit (ReLU) activation function, where the inputs are sampled from the standard Gaussian distribution and the outputs are generated by a noisy teacher network. We analyze the performance of gradient descent for training such neural networks via empirical risk minimization, and provide algorithm-dependent guarantees. In particular, we prove that tensor initialization followed by gradient descent converges to the ground-truth parameters at a linear rate up to a statistical error. To the best of our knowledge, this is the first work characterizing the recovery guarantee for practical learning of one-hidden-layer ReLU networks with multiple neurons. Numerical experiments verify our theoretical findings.
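To make the setting concrete, below is a minimal sketch of the learning problem the abstract describes: a one-hidden-layer ReLU teacher network with Gaussian inputs and additive noise, and a student network trained by gradient descent on the empirical squared risk. The dimensions, step size, noise level, and the plain random initialization are illustrative assumptions only; the paper's analyzed algorithm uses a tensor-based initialization, which is not reproduced here.

```python
import numpy as np

# Illustrative sketch of the setting (not the authors' exact algorithm):
#   teacher: y = sum_j v_j * relu(w_j^T x) + noise, x ~ N(0, I_d)
#   student: same architecture, trained by gradient descent on empirical risk.
# W_true, v_true, eta, and the random init below are assumed for illustration.

rng = np.random.default_rng(0)
d, k, n = 10, 4, 5000        # input dimension, hidden neurons, sample size
sigma = 0.01                  # noise standard deviation

W_true = rng.normal(size=(k, d))   # teacher first-layer weights (ground truth)
v_true = np.ones(k)                # teacher output weights (held fixed here)

X = rng.normal(size=(n, d))                                   # Gaussian inputs
y = np.maximum(X @ W_true.T, 0.0) @ v_true + sigma * rng.normal(size=n)

W = 0.5 * rng.normal(size=(k, d))  # student init (random; paper uses tensor init)
eta = 0.05
for _ in range(500):
    H = X @ W.T                          # pre-activations, shape (n, k)
    pred = np.maximum(H, 0.0) @ v_true   # student predictions
    resid = pred - y                     # residuals of the empirical risk
    # gradient of (1/2n) * sum_i resid_i^2 with respect to W
    grad = ((resid[:, None] * (H > 0.0) * v_true).T @ X) / n
    W -= eta * grad

final_pred = np.maximum(X @ W.T, 0.0) @ v_true
print("final empirical risk:", 0.5 * np.mean((final_pred - y) ** 2))
```

With a good initialization, the paper's result says iterates of this kind of update contract toward the teacher parameters at a linear rate until the statistical error floor (driven by the noise level and sample size) is reached; with the random initialization used above, convergence to the ground truth is not guaranteed.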
| Original language | English (US) |
|---|---|
| State | Published - 2019 |
| Externally published | Yes |
| Event | 22nd International Conference on Artificial Intelligence and Statistics, AISTATS 2019 - Naha, Japan |
| Duration | Apr 16 2019 → Apr 18 2019 |
Conference
| Conference | 22nd International Conference on Artificial Intelligence and Statistics, AISTATS 2019 |
|---|---|
| Country/Territory | Japan |
| City | Naha |
| Period | 4/16/19 → 4/18/19 |
All Science Journal Classification (ASJC) codes
- Artificial Intelligence
- Statistics and Probability