Sparse Coding and Autoencoders

Akshay Rangamani, Anirbit Mukherjee, Amitabh Basu, Ashish Arora, Tejaswini Ganapathi, Sang Chin, Trac D. Tran

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

In this work we study the landscape of the squared loss of an autoencoder when the data-generative model is that of 'sparse coding'/'dictionary learning'. The neural net considered is an $\mathbb{R}^{n}\rightarrow \mathbb{R}^{n}$ mapping with a single ReLU activation layer of size $h > n$. The net has access to vectors $y \in \mathbb{R}^{n}$ obtained as $y = A^{\ast}x^{\ast}$, where $x^{\ast} \in \mathbb{R}^{h}$ is a sparse high-dimensional vector and $A^{\ast} \in \mathbb{R}^{n\times h}$ is an overcomplete incoherent matrix. Under very mild distributional assumptions on $x^{\ast}$, we prove that the norm of the expected gradient of the squared loss is asymptotically (in the sparse-code dimension) negligible for all points in a small neighborhood of $A^{\ast}$. This is supported by experimental evidence on synthetic data. Our experiments suggest that $A^{\ast}$ sits at the bottom of a well in the landscape, and that gradient descent on this loss gets columnwise very close to the original dictionary even from initializations far away. Along the way we prove that a layer of ReLU gates can be set up to automatically recover the support of the sparse codes. Since this property holds independently of the loss function, we believe it could be of independent interest. A full version of this paper is available at: https://arxiv.org/abs/1708.03735
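The support-recovery claim in the abstract can be illustrated with a small NumPy sketch. This is not the authors' code; it is a minimal simulation of the generative model $y = A^{\ast}x^{\ast}$, assuming a random Gaussian dictionary with unit-norm columns (a standard way to get an incoherent overcomplete matrix) and an encoder whose weights are $A^{\ast\top}$ with a uniform negative bias. Incoherence keeps the cross-correlation noise in $A^{\ast\top}y$ small, so the ReLU thresholds away exactly the off-support coordinates. The specific dimensions, sparsity level, and bias value below are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n, h, k = 1024, 2048, 3  # signal dim, code dim (h > n), sparsity

# Incoherent overcomplete dictionary: random Gaussian columns, unit-normalized
A = rng.standard_normal((n, h))
A /= np.linalg.norm(A, axis=0)

# Sparse code x*: k random coordinates with magnitudes in [1, 2]
x = np.zeros(h)
support = rng.choice(h, size=k, replace=False)
x[support] = rng.uniform(1.0, 2.0, size=k)

# Observed vector from the generative model
y = A @ x

# ReLU encoder layer with weights A^T and a uniform negative bias.
# A^T y = x* + cross-correlation noise; with an incoherent dictionary
# the noise stays well below 0.5, so thresholding at 0.5 zeroes
# exactly the coordinates outside the support of x*.
b = -0.5
code = np.maximum(A.T @ y + b, 0.0)

recovered = set(np.nonzero(code)[0])
print(sorted(recovered), sorted(support.tolist()))
```

With these parameters the nonzero coordinates of the encoder output coincide with the support of $x^{\ast}$ with high probability, which is the sense in which a ReLU layer "automatically recovers" the support.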

Original language: English (US)
Title of host publication: 2018 IEEE International Symposium on Information Theory, ISIT 2018
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 36-40
Number of pages: 5
ISBN (Print): 9781538647806
DOIs
State: Published - Aug 15 2018
Externally published: Yes
Event: 2018 IEEE International Symposium on Information Theory, ISIT 2018 - Vail, United States
Duration: Jun 17 2018 - Jun 22 2018

Publication series

Name: IEEE International Symposium on Information Theory - Proceedings
Volume: 2018-June
ISSN (Print): 2157-8095

Other

Other: 2018 IEEE International Symposium on Information Theory, ISIT 2018
Country/Territory: United States
City: Vail
Period: 6/17/18 - 6/22/18

All Science Journal Classification (ASJC) codes

  • Theoretical Computer Science
  • Information Systems
  • Modeling and Simulation
  • Applied Mathematics
