Learning One-hidden-layer ReLU Networks via Gradient Descent

Research output: Contribution to journal › Conference article › peer-review


Abstract

We study the problem of learning one-hidden-layer neural networks with the Rectified Linear Unit (ReLU) activation function, where the inputs are sampled from the standard Gaussian distribution and the outputs are generated by a noisy teacher network. We analyze the performance of gradient descent for training such neural networks based on empirical risk minimization, and provide algorithm-dependent guarantees. In particular, we prove that tensor initialization followed by gradient descent converges to the ground-truth parameters at a linear rate, up to some statistical error. To the best of our knowledge, this is the first work characterizing the recovery guarantee for practical learning of one-hidden-layer ReLU networks with multiple neurons. Numerical experiments verify our theoretical findings.
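The teacher-student setup described in the abstract can be sketched in a few lines of NumPy: a one-hidden-layer ReLU teacher generates noisy labels from standard Gaussian inputs, and a student with the same architecture is trained by gradient descent on the empirical squared risk. This is only an illustrative toy, not the paper's method: the paper uses tensor initialization, whereas here the student is initialized near the ground truth purely to show the local linear convergence phenomenon; the dimensions, step size, and fixed second-layer weights `v` are all arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

d, k, n = 10, 4, 2000              # input dim, hidden neurons, samples (arbitrary)
W_star = rng.normal(size=(k, d))   # ground-truth first-layer weights
v = np.ones(k)                     # fixed second-layer weights (assumption)

X = rng.normal(size=(n, d))        # standard Gaussian inputs
noise = 0.01 * rng.normal(size=n)
y = np.maximum(X @ W_star.T, 0.0) @ v + noise  # noisy teacher outputs

def loss_and_grad(W):
    """Empirical squared risk and its gradient w.r.t. the student weights W."""
    H = X @ W.T                    # pre-activations, shape (n, k)
    A = np.maximum(H, 0.0)         # ReLU activations
    r = A @ v - y                  # residuals, shape (n,)
    loss = 0.5 * np.mean(r ** 2)
    # dL/dW_j = mean_i  r_i * v_j * 1{h_ij > 0} * x_i
    G = ((r[:, None] * (H > 0.0)) * v).T @ X / n
    return loss, G

# Init near the truth as a stand-in for the paper's tensor initialization.
W = W_star + 0.1 * rng.normal(size=(k, d))
eta = 0.2
losses = []
for _ in range(300):
    loss, G = loss_and_grad(W)
    losses.append(loss)
    W -= eta * G

print(f"loss: {losses[0]:.4f} -> {losses[-1]:.6f}")
```

Run on this synthetic data, the loss drops rapidly toward the noise floor, consistent with the linear-rate convergence (up to statistical error) that the paper proves for tensor-initialized gradient descent.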

Original language: English (US)
Pages (from-to): 1524-1534
Number of pages: 11
Journal: Proceedings of Machine Learning Research
Volume: 89
State: Published - 2019
Externally published: Yes
Event: 22nd International Conference on Artificial Intelligence and Statistics, AISTATS 2019 - Naha, Japan
Duration: Apr 16 2019 - Apr 18 2019

All Science Journal Classification (ASJC) codes

  • Software
  • Control and Systems Engineering
  • Statistics and Probability
  • Artificial Intelligence
