Bolasso

Bolasso: model consistent Lasso estimation through the bootstrap. We consider the least-squares linear regression problem with regularization by the \(\ell_1\)-norm, a problem usually referred to as the Lasso. In this paper, we present a detailed asymptotic analysis of the model consistency of the Lasso. For various decays of the regularization parameter, we compute asymptotic equivalents of the probability of correct model selection (i.e., variable selection). For a specific decay rate, we show that the Lasso selects all the variables that should enter the model with probability tending to one exponentially fast, while it selects all other variables with strictly positive probability. We show that this property implies that if we run the Lasso for several bootstrapped replications of a given sample, then intersecting the supports of the Lasso bootstrap estimates leads to consistent model selection. This novel variable selection algorithm, referred to as the Bolasso, compares favorably with other linear regression methods on synthetic data and datasets from the UCI machine learning repository.
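The intersection step described in the abstract is straightforward to prototype. Below is a minimal sketch of the Bolasso support estimate, assuming scikit-learn's Lasso implementation; the function name bolasso_support, the fixed regularization strength alpha, and the number of bootstrap replications are illustrative choices, not the paper's prescribed settings (the paper's analysis concerns how the regularization parameter decays with the sample size).

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

def bolasso_support(X, y, alpha=0.1, n_bootstrap=128, rng=None):
    """Intersect Lasso supports over bootstrap replications.

    Returns a boolean mask over the p variables: a variable is kept
    only if its Lasso coefficient is nonzero on every replication.
    """
    rng = np.random.default_rng(rng)
    n, p = X.shape
    support = np.ones(p, dtype=bool)           # start with all variables
    for _ in range(n_bootstrap):
        idx = rng.integers(0, n, size=n)       # bootstrap sample (with replacement)
        coef = Lasso(alpha=alpha).fit(X[idx], y[idx]).coef_
        support &= coef != 0                   # intersect supports
    return support

# Toy usage: sparse ground truth, noisy observations.
rng = np.random.default_rng(0)
n, p = 200, 16
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:4] = [3.0, -2.0, 1.5, 1.0]               # only the first 4 variables matter
y = X @ beta + 0.5 * rng.standard_normal(n)

support = bolasso_support(X, y, alpha=0.05, n_bootstrap=64, rng=1)
print("selected variables:", np.flatnonzero(support))

# Refit unregularized least squares on the selected support to
# remove the Lasso's shrinkage bias.
ols = LinearRegression().fit(X[:, support], y)
print("refitted coefficients:", ols.coef_)
```

In practice the hard intersection is sometimes relaxed to keep variables selected in a large fraction of replications rather than in all of them, trading some of the exact-support guarantee for robustness to a few unlucky bootstrap draws.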


References in zbMATH (referenced in 26 articles)

Showing results 1 to 20 of 26, sorted by year (citations).


  1. Bien, Jacob; Gaynanova, Irina; Lederer, Johannes; Müller, Christian L.: Prediction error bounds for linear regression with the TREX (2019)
  2. Obuchi, Tomoyuki; Kabashima, Yoshiyuki: Semi-analytic resampling in Lasso (2019)
  3. Champion, Magali; Picheny, Victor; Vignes, Matthieu: Inferring large graphs using \(\ell_1\)-penalized likelihood (2018)
  4. Devijver, Emilie: Joint rank and variable selection for parsimonious estimation in a high-dimensional finite mixture regression model (2017)
  5. Liu, Han; Wang, Lie: TIGER: A tuning-insensitive approach for optimally estimating Gaussian graphical models (2017)
  6. Amato, Umberto; Antoniadis, Anestis; De Feis, Italia: Additive model selection (2016)
  7. Bar-Hen, Avner; Poggi, Jean-Michel: Influence measures and stability for graphical models (2016)
  8. Bradic, Jelena: Randomized maximum-contrast selection: subagging for large-scale regression (2016)
  9. Latouche, Pierre; Mattei, Pierre-Alexandre; Bouveyron, Charles; Chiquet, Julien: Combining a relaxed EM algorithm with Occam’s razor for Bayesian variable selection in high-dimensional regression (2016)
  10. Perthame, Émeline; Friguet, Chloé; Causeur, David: Stability of feature selection in classification issues for high-dimensional correlated data (2016)
  11. Meinshausen, Nicolai; Bühlmann, Peter: Maximin effects in inhomogeneous large-scale data (2015)
  12. Arias-Castro, Ery; Lounici, Karim: Estimation and variable selection with exponential weights (2014)
  13. Champion, Magali; Cierco-Ayrolles, Christine; Gadat, Sébastien; Vignes, Matthieu: Sparse regression and support recovery with \(\mathbb{L}_2\)-boosting algorithms (2014)
  14. Roberts, S.; Nowak, G.: Stabilizing the lasso against cross-validation variability (2014)
  15. Trendafilov, Nickolay T.: From simple structure to sparse components: a review (2014)
  16. Yamada, Makoto; Jitkrittum, Wittawat; Sigal, Leonid; Xing, Eric P.; Sugiyama, Masashi: High-dimensional feature selection by feature-wise kernelized Lasso (2014)
  17. Chen, Jun; Li, Hongzhe: Variable selection for sparse Dirichlet-multinomial regression with an application to microbiome data analysis (2013)
  18. Li, Shuang; Hsu, Li; Peng, Jie; Wang, Pei: Bootstrap inference for network construction with an application to a breast cancer microarray study (2013)
  19. Liu, Hanzhong; Yu, Bin: Asymptotic properties of Lasso+mLS and Lasso+Ridge in sparse high-dimensional linear regression (2013)
  20. Yu, Bin: Stability (2013)
