Variable Selection via Penalized Neural Network: a Drop-Out-One Loss Approach

Research output: Contribution to journal › Conference article › peer-review

13 Scopus citations

Abstract

We propose a variable selection method for high-dimensional regression models that allows for complex, nonlinear, and high-order interactions among variables. The proposed method approximates this complex system with a penalized neural network and selects explanatory variables by measuring their utility in explaining the variance of the response variable. This measurement is based on a novel statistic called the Drop-Out-One Loss. The proposed method also supports (overlapping) group variable selection. We prove that the proposed method selects the relevant variables and excludes the irrelevant variables with probability one as the sample size goes to infinity, a property referred to as the Oracle Property. Experimental results on simulated and real-world datasets demonstrate the effectiveness of our method in terms of both variable selection and prediction accuracy.
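The core idea in the abstract — scoring each variable by how much the model's loss degrades when that variable is "dropped" — can be illustrated with a minimal NumPy sketch. Everything below (the toy data, training a small penalized network by gradient descent, neutralising a feature by mean-imputation, the relative selection threshold) is an illustrative assumption for exposition, not the paper's exact Drop-Out-One Loss statistic, penalty, or experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: only the first two of five features matter (hypothetical
# example, not the paper's experiments). Includes an interaction term.
n, d = 500, 5
X = rng.normal(size=(n, d))
y = 2.0 * X[:, 0] - 3.0 * X[:, 1] + 0.5 * X[:, 0] * X[:, 1] \
    + 0.1 * rng.normal(size=n)

# One-hidden-layer tanh network trained by full-batch gradient descent,
# with a small L2 weight penalty standing in for the paper's penalty.
h = 16
W1 = rng.normal(scale=0.5, size=(d, h)); b1 = np.zeros(h)
W2 = rng.normal(scale=0.5, size=h);      b2 = 0.0
lam, lr = 1e-4, 0.01

def forward(Xe):
    Z = np.tanh(Xe @ W1 + b1)
    return Z, Z @ W2 + b2

for _ in range(5000):
    Z, pred = forward(X)
    gpred = 2 * (pred - y) / n                 # d(MSE)/d(pred)
    gW2 = Z.T @ gpred + 2 * lam * W2
    gb2 = gpred.sum()
    gZ = np.outer(gpred, W2) * (1 - Z ** 2)    # backprop through tanh
    gW1 = X.T @ gZ + 2 * lam * W1
    gb1 = gZ.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

def mse(Xe):
    return np.mean((forward(Xe)[1] - y) ** 2)

base = mse(X)
# Drop-out-one style score: loss increase when feature j is neutralised
# (here: replaced by its sample mean; the paper's statistic differs).
scores = []
for j in range(d):
    Xj = X.copy()
    Xj[:, j] = X[:, j].mean()
    scores.append(mse(Xj) - base)

# Keep features whose loss increase is large relative to the biggest one.
thresh = 0.1 * max(scores)
selected = [j for j, s in enumerate(scores) if s > thresh]
print(selected)
```

On this toy problem the two informative features receive loss-increase scores orders of magnitude above the noise features, so the relative threshold recovers them; the paper's contribution is proving that a properly calibrated version of such a statistic enjoys the Oracle Property.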

Original language: English (US)
Pages (from-to): 5620-5629
Number of pages: 10
Journal: Proceedings of Machine Learning Research
Volume: 80
State: Published - 2018
Externally published: Yes
Event: 35th International Conference on Machine Learning, ICML 2018 - Stockholm, Sweden
Duration: Jul 10 2018 - Jul 15 2018

All Science Journal Classification (ASJC) codes

  • Software
  • Control and Systems Engineering
  • Statistics and Probability
  • Artificial Intelligence
