TY - GEN
T1 - Annealed sparsity via adaptive and dynamic shrinking
AU - Zhang, Kai
AU - Zhe, Shandian
AU - Cheng, Chaoran
AU - Wei, Zhi
AU - Chen, Zhengzhang
AU - Chen, Haifeng
AU - Jiang, Guofei
AU - Qi, Yuan
AU - Ye, Jieping
N1 - Publisher Copyright:
© 2016 ACM.
PY - 2016/8/13
Y1 - 2016/8/13
N2 - Sparse learning has received a tremendous amount of interest in high-dimensional data analysis due to its model interpretability and low computational cost. Among the various techniques, adaptive ℓ1-regularization is an effective framework to improve the convergence behaviour of the LASSO by using varying strengths of regularization across different features. In the meantime, the adaptive structure also makes it very powerful in modelling grouped sparsity patterns, being particularly useful in high-dimensional multi-task problems. However, choosing an appropriate, global regularization weight is still an open problem. In this paper, inspired by the annealing technique in materials science, we propose to achieve "annealed sparsity" by designing a dynamic shrinking scheme that simultaneously optimizes the regularization weights and model coefficients in sparse (multi-task) learning. The dynamic structures of our algorithm are twofold. Feature-wise ("spatially"), the regularization weights are updated interactively with the model coefficients, allowing us to improve the global regularization structure. Iteration-wise ("temporally"), this interaction is coupled with gradually boosted ℓ1-regularization by adjusting an equality norm-constraint, achieving an "annealing" effect that further improves model selection. This renders interesting shrinking behaviour over the whole solution path. Our method competes favorably with state-of-the-art methods in sparse (multi-task) learning. We also apply it to expression quantitative trait loci (eQTL) analysis, which yields useful biological insights in a human cancer (melanoma) study.
AB - Sparse learning has received a tremendous amount of interest in high-dimensional data analysis due to its model interpretability and low computational cost. Among the various techniques, adaptive ℓ1-regularization is an effective framework to improve the convergence behaviour of the LASSO by using varying strengths of regularization across different features. In the meantime, the adaptive structure also makes it very powerful in modelling grouped sparsity patterns, being particularly useful in high-dimensional multi-task problems. However, choosing an appropriate, global regularization weight is still an open problem. In this paper, inspired by the annealing technique in materials science, we propose to achieve "annealed sparsity" by designing a dynamic shrinking scheme that simultaneously optimizes the regularization weights and model coefficients in sparse (multi-task) learning. The dynamic structures of our algorithm are twofold. Feature-wise ("spatially"), the regularization weights are updated interactively with the model coefficients, allowing us to improve the global regularization structure. Iteration-wise ("temporally"), this interaction is coupled with gradually boosted ℓ1-regularization by adjusting an equality norm-constraint, achieving an "annealing" effect that further improves model selection. This renders interesting shrinking behaviour over the whole solution path. Our method competes favorably with state-of-the-art methods in sparse (multi-task) learning. We also apply it to expression quantitative trait loci (eQTL) analysis, which yields useful biological insights in a human cancer (melanoma) study.
KW - Adaptive LASSO
KW - Multi-task LASSO
KW - Regularization path
KW - Sparse multi-task learning
KW - Sparse regression
UR - http://www.scopus.com/inward/record.url?scp=84985041113&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84985041113&partnerID=8YFLogxK
U2 - 10.1145/2939672.2939769
DO - 10.1145/2939672.2939769
M3 - Conference contribution
AN - SCOPUS:84985041113
T3 - Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
SP - 1325
EP - 1334
BT - KDD 2016 - Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
PB - Association for Computing Machinery
T2 - 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD 2016
Y2 - 13 August 2016 through 17 August 2016
ER -