SparseLOGREG

A Simple and Efficient Algorithm for Gene Selection using Sparse Logistic Regression.

Motivation: This paper gives a new and efficient algorithm for the sparse logistic regression problem. The proposed algorithm is based on the Gauss–Seidel method and is asymptotically convergent. It is simple and extremely easy to implement; it neither uses any sophisticated mathematical programming software nor needs any matrix operations. It can be applied to a variety of real-world problems, such as identifying marker genes and building a classifier in the context of cancer diagnosis from microarray data.

Results: The gene selection method suggested in this paper is demonstrated on two real-world data sets, and the results are consistent with the literature.
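
As a rough illustration (not the paper's own Gauss–Seidel implementation), the sketch below performs the same kind of L1-penalized logistic regression and keeps the genes with nonzero weights, using scikit-learn's LogisticRegression; the synthetic data, the liblinear solver, and the regularization strength C are assumptions made for the example.

    # Sketch: L1-penalized logistic regression for marker-gene selection.
    # Stand-in for the paper's algorithm; data shapes and C are illustrative only.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n_samples, n_genes = 60, 2000            # microarray-like regime: far more genes than samples
    X = rng.standard_normal((n_samples, n_genes))
    y = (X[:, :5].sum(axis=1) + 0.5 * rng.standard_normal(n_samples) > 0).astype(int)

    # The L1 penalty drives most gene weights exactly to zero; smaller C means a stronger penalty.
    model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
    model.fit(X, y)

    selected = np.flatnonzero(model.coef_.ravel())   # indices of genes retained by the model
    print(f"{selected.size} genes selected:", selected[:10])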


References in zbMATH (referenced in 25 articles)

Showing results 1 to 20 of 25, sorted by year (citations).


  1. Liu, Xiaoman; Liu, Jijun: Image restoration from noisy incomplete frequency data by alternative iteration scheme (2020)
  2. Algamal, Zakariya Yahya; Lee, Muhammad Hisyam: A two-stage sparse logistic regression for optimal gene selection in high-dimensional microarray data classification (2019)
  3. Zhou, Shengbin; Zhou, Jingke; Zhang, Bo: High-dimensional generalized linear models incorporating graphical structure among predictors (2019)
  4. Gotoh, Jun-ya; Takeda, Akiko; Tono, Katsuya: DC formulations and algorithms for sparse optimization problems (2018)
  5. Xu, Jiucheng; Mu, Huiyu; Wang, Yun; Huang, Fangzhou: Feature genes selection using supervised locally linear embedding and correlation coefficient for microarray classification (2018)
  6. Yang, Wenyuan; Li, Chan; Zhao, Hong: Label distribution learning by regularized sample self-representation (2018)
  7. Qiao, Maoying; Liu, Liu; Yu, Jun; Xu, Chang; Tao, Dacheng: Diversified dictionaries for multi-instance learning (2017)
  8. Dong, Qian; Liu, Xin; Wen, Zai-Wen; Yuan, Ya-Xiang: A parallel line search subspace correction method for composite convex optimization (2015)
  9. Wang, Jie; Wonka, Peter; Ye, Jieping: Lasso screening rules via dual polytope projection (2015)
  10. Xu, Yangyang; Yin, Wotao: Block stochastic gradient iteration for convex and nonconvex optimization (2015)
  11. Groll, Andreas; Tutz, Gerhard: Variable selection for generalized linear mixed models by (L_1)-penalized estimation (2014)
  12. Peng, Hong-Yi; Jiang, Chun-Fu; Fang, Xiang; Liu, Jin-Shan: Variable selection for Fisher linear discriminant analysis using the modified sequential backward selection algorithm for the microarray data (2014)
  13. Yu, Yi; Feng, Yang: APPLE: approximate path for penalized likelihood estimators (2014)
  14. Blondel, Mathieu; Seki, Kazuhiro; Uehara, Kuniaki: Block coordinate descent algorithms for large-scale sparse multiclass classification (2013)
  15. Korzeń, M.; Jaroszewicz, S.; Klęsk, P.: Logistic regression with weight grouping priors (2013)
  16. Choi, Hosik; Yeo, Donghwa; Kwon, Sunghoon; Kim, Yongdai: Gene selection and prediction for cancer classification using support vector machines with a reject option (2011)
  17. Yger, F.; Rakotomamonjy, A.: Wavelet kernel learning (2011)
  18. Goeman, Jelle J.: (L_1) penalized estimation in the Cox proportional hazards model (2010)
  19. Friedman, Jerome; Hastie, Trevor; Tibshirani, Rob: Regularization Paths for Generalized Linear Models via Coordinate Descent (2010)
  20. Leng, Chenlei; Li, Bo: Least squares approximation with a diverging number of parameters (2010)
