sparsenet

R package SparseNet: coordinate descent with nonconvex penalties. We address the problem of sparse variable selection in linear models. A number of nonconvex penalties have been proposed in the literature for this purpose, along with a variety of convex-relaxation algorithms for finding good solutions. We pursue a coordinate-descent approach for optimization and study its convergence properties. We characterize the properties of penalties suitable for this approach, study their corresponding threshold functions, and describe a df-standardizing reparametrization that assists our pathwise algorithm. The MC+ penalty is ideally suited to this task, and we use it to demonstrate the performance of our algorithm. Technical derivations and experiments related to this article are included in the supplementary materials.
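The threshold function behind the coordinate updates has a simple closed form for the MC+ penalty. As a rough illustration (a minimal sketch, not code taken from the package; the function name mcplus_threshold is ours), in R:

    ## MC+ threshold operator S(beta; lambda, gamma), gamma > 1: the closed-form
    ## solution of the univariate penalized least-squares problem used in each
    ## coordinate update. It interpolates between soft thresholding
    ## (gamma -> Inf) and hard thresholding (gamma -> 1+).
    mcplus_threshold <- function(beta, lambda, gamma) {
      stopifnot(gamma > 1)
      a <- abs(beta)
      ifelse(a <= lambda, 0,
             ifelse(a <= gamma * lambda,
                    sign(beta) * (a - lambda) / (1 - 1 / gamma),
                    beta))
    }

A typical call to the package itself (interface as on CRAN, shown here only as an assumed usage pattern) would be fit <- sparsenet(x, y) for the solution path over a grid of (lambda, gamma) values and cvfit <- cv.sparsenet(x, y) for cross-validated selection along that grid.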


References in zbMATH (referenced in 80 articles, 1 standard article)

Showing results 1 to 20 of 80, sorted by year (citations).


  1. Sun, Ruoyu; Ye, Yinyu: Worst-case complexity of cyclic coordinate descent: O(n^2) gap with randomized version (2021)
  2. Buccini, Alessandro; De la Cruz Cabrera, Omar; Donatelli, Marco; Martinelli, Andrea; Reichel, Lothar: Large-scale regression with non-convex loss and penalty (2020)
  3. Carlsson, Marcus; Gerosa, Daniele; Olsson, Carl: An unbiased approach to compressed sensing (2020)
  4. Hastie, Trevor; Tibshirani, Robert; Tibshirani, Ryan: Best subset, forward stepwise or Lasso? Analysis and recommendations based on extensive comparisons (2020)
  5. Hazimeh, Hussein; Mazumder, Rahul: Fast best subset selection: coordinate descent and local combinatorial optimization algorithms (2020)
  6. Mazumder, Rahul; Saldana, Diego; Weng, Haolei: Matrix completion with nonconvex regularization: spectral operators and scalable algorithms (2020)
  7. Mazumder, Rahul; Weng, Haolei: Computing the degrees of freedom of rank-regularized estimators and cousins (2020)
  8. Sarwar, Owais; Sauk, Benjamin; Sahinidis, Nikolaos V.: A discussion on practical considerations with sparse regression methodologies (2020)
  9. Takada, Masaaki; Suzuki, Taiji; Fujisawa, Hironori: Independently interpretable Lasso for generalized linear models (2020)
  10. Xie, Dongxiu; Woerdeman, Hugo J.; Xu, An-Bao: Parametrized quasi-soft thresholding operator for compressed sensing and matrix completion (2020)
  11. Xu, Xinyi; Li, Xiangjie; Zhang, Jingxiao: Regularization methods for high-dimensional sparse control function models (2020)
  12. Yu, Guan; Yin, Liang; Lu, Shu; Liu, Yufeng: Confidence intervals for sparse penalized regression with random designs (2020)
  13. Bhadra, Anindya; Datta, Jyotishka; Li, Yunfan; Polson, Nicholas G.; Willard, Brandon: Prediction risk for the horseshoe regression (2019)
  14. Bhadra, Anindya; Datta, Jyotishka; Polson, Nicholas G.; Willard, Brandon: Lasso meets horseshoe: a survey (2019)
  15. Exarchakis, Georgios; Bornschein, Jörg; Sheikh, Abdul-Saboor; Dai, Zhenwen; Henniges, Marc; Drefs, Jakob; Lücke, Jörg: ProSper -- a Python library for probabilistic sparse coding with non-standard priors and superpositions (2019) arXiv
  16. Lee, Seokho; Kim, Seonhwa: Marginalized Lasso in sparse regression (2019)
  17. Mak, Simon; Wu, C. F. Jeff: cmenet: a new method for bi-level variable selection of conditional main effects (2019)
  18. Ma, Rongrong; Miao, Jianyu; Niu, Lingfeng; Zhang, Peng: Transformed ℓ_1 regularization for learning sparse deep neural networks (2019)
  19. Meulman, Jacqueline J.; van der Kooij, Anita J.; Duisters, Kevin L. W.: ROS regression: integrating regularization with optimal scaling regression (2019)
  20. Moran, Gemma E.; Ročková, Veronika; George, Edward I.: Variance prior forms for high-dimensional Bayesian variable selection (2019)
