R package SparseNet: coordinate descent with nonconvex penalties

We address the problem of sparse selection in linear models. A number of nonconvex penalties have been proposed in the literature for this purpose, along with a variety of convex-relaxation algorithms for finding good solutions. We pursue a coordinate-descent approach for optimization, and study its convergence properties. We characterize the properties of penalties suitable for this approach, study their corresponding threshold functions, and describe a df-standardizing reparametrization that assists our pathwise algorithm. The MC+ penalty is ideally suited to this task, and we use it to demonstrate the performance of our algorithm. Certain technical derivations and experiments related to this article are included in the supplementary materials.
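To make the coordinate-descent idea concrete, the sketch below implements the well-known threshold function associated with the MC+ penalty (the "firm" thresholding operator, which interpolates between soft thresholding as gamma goes to infinity and hard thresholding as gamma approaches 1), together with a toy cyclic coordinate-descent loop. This is a minimal illustration in Python, not the SparseNet package's implementation; the function names are illustrative, and the loop assumes the columns of X are standardized to unit L2 norm.

```python
import numpy as np

def mcplus_threshold(z, lam, gamma):
    """Firm (MC+) thresholding operator for a univariate problem.

    Assumes gamma > 1. Returns 0 for |z| <= lam, a scaled soft-threshold
    value for lam < |z| <= gamma * lam, and z unchanged for |z| > gamma * lam.
    """
    az = abs(z)
    if az <= lam:
        return 0.0
    if az <= gamma * lam:
        return float(np.sign(z) * (az - lam) / (1.0 - 1.0 / gamma))
    return float(z)

def cd_mcplus(X, y, lam, gamma, n_iter=200):
    """Toy cyclic coordinate descent for MC+-penalized least squares.

    Assumes each column of X has unit L2 norm, so each coordinate update
    reduces to applying the threshold to a partial-residual correlation.
    Because the penalty is nonconvex, this converges to a stationary
    point, not necessarily a global minimizer.
    """
    n, p = X.shape
    beta = np.zeros(p)
    r = y.astype(float).copy()  # residual y - X @ beta, with beta = 0
    for _ in range(n_iter):
        for j in range(p):
            zj = beta[j] + X[:, j] @ r        # univariate least-squares fit
            new = mcplus_threshold(zj, lam, gamma)
            r += X[:, j] * (beta[j] - new)     # keep residual in sync
            beta[j] = new
    return beta
```

In a pathwise algorithm of the kind the abstract describes, one would call `cd_mcplus` over a grid of `lam` values (and possibly `gamma` values), warm-starting each fit from the previous solution.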

References in zbMATH (referenced in 71 articles, 1 standard article)

Showing results 21 to 40 of 71, sorted by year (citations).
  21. Ročková, Veronika; George, Edward I.: The spike-and-slab LASSO (2018)
  22. Shi, Yue-Yong; Cao, Yong-Xiu; Yu, Ji-Chang; Jiao, Yu-Ling: Variable selection via generalized SELO-penalized linear regression models (2018)
  23. Shi, Yue Yong; Jiao, Yu Ling; Cao, Yong Xiu; Liu, Yan Yan: An alternating direction method of multipliers for MCP-penalized regression with high-dimensional data (2018)
  24. Shi, Yueyong; Wu, Yuanshan; Xu, Deyi; Jiao, Yuling: An ADMM with continuation algorithm for non-convex SICA-penalized regression in high dimensions (2018)
  25. Tansey, Wesley; Koyejo, Oluwasanmi; Poldrack, Russell A.; Scott, James G.: False discovery rate smoothing (2018)
  26. Wu, C. F. Jeff: A fresh look at effect aliasing and interactions: some new wine in old bottles (2018)
  27. Zhang, Shuai; Xin, Jack: Minimization of transformed (L_1) penalty: theory, difference of convex function algorithm, and robust application in compressed sensing (2018)
  28. Zhao, Tuo; Liu, Han; Zhang, Tong: Pathwise coordinate optimization for sparse learning: algorithm and theory (2018)
  29. Ahn, Miju; Pang, Jong-Shi; Xin, Jack: Difference-of-convex learning: directional stationarity, optimality, and sparsity (2017)
  30. Aragam, Bryon; Gu, Jiaying; Zhou, Qing: Learning large-scale Bayesian networks with the sparsebn package (2017) arXiv
  31. Giuzio, Margherita: Genetic algorithm versus classical methods in sparse index tracking (2017)
  32. Huang, Po-Hsien; Chen, Hung; Weng, Li-Jen: A penalized likelihood method for structural equation modeling (2017)
  33. Mkhadri, Abdallah; Ouhourane, Mohamed; Oualkacha, Karim: A coordinate descent algorithm for computing penalized smooth quantile regression (2017)
  34. Mak, Simon; Wu, C. F. Jeff: cmenet: a new method for bi-level variable selection of conditional main effects (2017) arXiv
  35. Suzumura, Shinya; Ogawa, Kohei; Sugiyama, Masashi; Karasuyama, Masayuki; Takeuchi, Ichiro: Homotopy continuation approaches for robust SV classification and regression (2017)
  36. Yamamoto, Michio; Hwang, Heungsun: Dimension-reduced clustering of functional data via subspace separation (2017)
  37. Zeng, Jinshan; Peng, Zhimin; Lin, Shaobo: GAITA: a Gauss-Seidel iterative thresholding algorithm for (\ell_q) regularized least squares regression (2017)
  38. Bertsimas, Dimitris; King, Angela: OR forum: An algorithmic approach to linear regression (2016)
  39. Bertsimas, Dimitris; King, Angela; Mazumder, Rahul: Best subset selection via a modern optimization lens (2016)
  40. Chen, Ting-Huei; Sun, Wei; Fine, Jason P.: Designing penalty functions in high dimensional problems: the role of tuning parameters (2016)