Tuning-parameter selection in regularized estimations of large covariance matrices

Yixin Fang, Binhuan Wang, Yang Feng

Research output: Contribution to journal › Article › peer-review

Abstract

Recently, many regularized estimators of large covariance matrices have been proposed, and the tuning parameters in these estimators are usually selected via cross-validation. However, there is a lack of consensus on the number of folds for conducting cross-validation. One round of cross-validation involves partitioning a sample of data into two complementary subsets: a training set and a validation set. In this manuscript, we demonstrate that if the estimation accuracy is measured in the Frobenius norm, the training set should consist of the majority of the data, whereas if the estimation accuracy is measured in the operator norm, the validation set should consist of the majority of the data. We also develop methods for selecting tuning parameters based on the bootstrap and compare them with their cross-validation counterparts. We demonstrate that the cross-validation methods with ‘optimal’ choices of folds are more appropriate than their bootstrap counterparts.
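
The Frobenius-norm case lends itself to a short illustration. Below is a minimal sketch (not the authors' code) of V-fold cross-validation for choosing the hard-thresholding level of a sample covariance matrix, scoring each candidate threshold by the squared Frobenius distance between the thresholded training covariance and the validation-set sample covariance. The function names, the candidate grid, and the choice of five folds (so the training set holds the majority of the data, consistent with the abstract's recommendation for the Frobenius norm) are all illustrative assumptions.

    # Sketch: cross-validated threshold selection under the Frobenius norm.
    # Hypothetical names; not the authors' implementation.
    import numpy as np

    def hard_threshold(S, t):
        """Zero out off-diagonal entries of S with magnitude below t."""
        T = np.where(np.abs(S) >= t, S, 0.0)
        np.fill_diagonal(T, np.diag(S))  # always keep the diagonal
        return T

    def cv_threshold(X, thresholds, n_folds=5, seed=None):
        """Return the threshold minimizing the average squared
        Frobenius-norm loss over n_folds random train/validation splits."""
        rng = np.random.default_rng(seed)
        folds = np.array_split(rng.permutation(X.shape[0]), n_folds)
        losses = np.zeros(len(thresholds))
        for v in range(n_folds):
            train = np.concatenate([f for u, f in enumerate(folds) if u != v])
            S_train = np.cov(X[train], rowvar=False)   # majority of the data
            S_val = np.cov(X[folds[v]], rowvar=False)  # held-out validation set
            for j, t in enumerate(thresholds):
                diff = hard_threshold(S_train, t) - S_val
                losses[j] += np.sum(diff ** 2)  # squared Frobenius norm
        return thresholds[np.argmin(losses)]

    # Example: select a threshold for p = 50 variables from n = 200 samples.
    X = np.random.default_rng(0).standard_normal((200, 50))
    t_hat = cv_threshold(X, thresholds=np.linspace(0.0, 0.5, 26), n_folds=5)
    print("selected threshold:", t_hat)

With five folds, each training set contains 80% of the observations; an operator-norm loss would, per the abstract, instead favor splits in which the validation set holds the majority of the data.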

Original language: English (US)
Pages (from-to): 494-509
Number of pages: 16
Journal: Journal of Statistical Computation and Simulation
Volume: 86
Issue number: 3
DOIs
State: Published - Feb 11, 2016
Externally published: Yes

All Science Journal Classification (ASJC) codes

  • Statistics and Probability
  • Modeling and Simulation
  • Statistics, Probability and Uncertainty
  • Applied Mathematics

Keywords

  • Frobenius norm
  • banding
  • bootstrap
  • covariance matrix
  • cross-validation
  • operator norm
  • thresholding
