glasso
The graphical lasso: new insights and alternatives. The graphical lasso [5] is an algorithm for learning the structure of an undirected Gaussian graphical model, using ℓ1 regularization to control the number of zeros in the precision matrix Θ = Σ⁻¹ [2, 11]. The R package glasso [5] is popular, fast, and allows one to efficiently build a path of models for different values of the tuning parameter. Convergence of glasso can be tricky; the converged precision matrix might not be the inverse of the estimated covariance, and occasionally it fails to converge with warm starts. In this paper we explain this behavior and propose new algorithms that appear to outperform glasso. By studying the “normal equations” we see that glasso is solving the dual of the graphical lasso penalized likelihood by block coordinate ascent, a result which can also be found in [2]. In this dual, the target of estimation is Σ, the covariance matrix, rather than the precision matrix Θ. We propose similar primal algorithms p-glasso and dp-glasso, which also operate by block-coordinate descent but take Θ as the optimization target. We study all of these algorithms, and in particular different approaches to solving their coordinate sub-problems. We conclude that dp-glasso is superior from several points of view.
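To illustrate the estimation problem the abstract describes, here is a minimal sketch in Python using scikit-learn's `GraphicalLasso` estimator, which solves the same ℓ1-penalized Gaussian likelihood as the R package (the simulated tridiagonal precision matrix and the choice `alpha=0.1` are illustrative assumptions, not taken from the paper):

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# Simulate data from a sparse Gaussian graphical model: a tridiagonal
# precision matrix Theta, so only adjacent variables are conditionally
# dependent. (Illustrative setup; alpha and dimensions are assumptions.)
rng = np.random.default_rng(0)
p = 10
Theta = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))
Sigma = np.linalg.inv(Theta)
X = rng.multivariate_normal(np.zeros(p), Sigma, size=500)

# Fit the graphical lasso: the ell_1 penalty (alpha) controls how many
# entries of the estimated precision matrix are shrunk exactly to zero.
model = GraphicalLasso(alpha=0.1).fit(X)
Theta_hat = model.precision_

# Zero off-diagonal entries of Theta_hat encode estimated conditional
# independences, i.e. missing edges in the graph.
n_zero = np.sum(np.abs(Theta_hat[np.triu_indices(p, k=1)]) < 1e-10)
print(f"zero off-diagonal entries: {n_zero} of {p * (p - 1) // 2}")
```

Larger values of `alpha` produce sparser graphs; fitting over a grid of `alpha` values reproduces the path of models mentioned above.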
References in zbMATH (referenced in 353 articles, 1 standard article)
Showing results 1 to 20 of 353.
Sorted by year.
- Andrade, Daniel; Takeda, Akiko; Fukumizu, Kenji: Robust Bayesian model selection for variable clustering with the Gaussian graphical model (2020)
- An, Ziwen; Nott, David J.; Drovandi, Christopher: Robust Bayesian synthetic likelihood via a semi-parametric approach (2020)
- Boudt, Kris; Rousseeuw, Peter J.; Vanduffel, Steven; Verdonck, Tim: The minimum regularized covariance determinant estimator (2020)
- Chen, Jingnan; Dai, Gengling; Zhang, Ning: An application of sparse-group Lasso regularization to equity portfolio optimization and sector selection (2020)
- Chen, Zehua; Jiang, Yiwei: A two-stage sequential conditional selection approach to sparse high-dimensional multivariate regression models (2020)
- Córdoba, Irene; Bielza, Concha; Larrañaga, Pedro: A review of Gaussian Markov models for conditional independence (2020)
- Fang, Qian; Yu, Chen; Weiping, Zhang: Regularized estimation of precision matrix for high-dimensional multivariate longitudinal data (2020)
- Fan, Jianqing; Feng, Yang; Xia, Lucy: A projection-based conditional dependence measure with applications to high-dimensional undirected graphical models (2020)
- Farnè, Matteo; Montanari, Angela: A large covariance matrix estimator under intermediate spikiness regimes (2020)
- Huang, Yimin; Kong, Xiangshun; Ai, Mingyao: Optimal designs in sparse linear models (2020)
- Jonas M. B. Haslbeck, Lourens J. Waldorp: mgm: Estimating Time-Varying Mixed Graphical Models in High-Dimensional Data (2020) not zbMATH
- Kang, Xiaoning; Deng, Xinwei: An improved modified Cholesky decomposition approach for precision matrix estimation (2020)
- Liu, Jin; Ma, Yingying; Wang, Hansheng: Semiparametric model for covariance regression analysis (2020)
- Nystrup, Peter; Lindström, Erik; Pinson, Pierre; Madsen, Henrik: Temporal hierarchies with autocorrelation for load forecasting (2020)
- Pan, Yuqing; Mai, Qing: Efficient computation for differential network analysis with applications to quadratic discriminant analysis (2020)
- Park, Jun Young; Polzehl, Joerg; Chatterjee, Snigdhansu; Brechmann, André; Fiecas, Mark: Semiparametric modeling of time-varying activation and connectivity in task-based fMRI data (2020)
- Poignard, Benjamin: Asymptotic theory of the adaptive sparse group Lasso (2020)
- Sun, Tianxiao; Necoara, Ion; Tran-Dinh, Quoc: Composite convex optimization with global and local inexact oracles (2020)
- Talukdar, Saurav; Deka, Deepjyoti; Doddi, Harish; Materassi, Donatello; Chertkov, Michael; Salapaka, Murti V.: Physics informed topology learning in networks of linear dynamical systems (2020)
- Wang, Cheng; Jiang, Binyan: An efficient ADMM algorithm for high dimensional precision matrix estimation via penalized quadratic loss (2020)