LSRN
LSRN: A parallel iterative solver for strongly over- or underdetermined systems. We describe a parallel iterative least squares solver named LSRN that is based on random normal projection. LSRN computes the min-length solution to min_{x∈ℝ^n} ∥Ax−b∥_2, where A∈ℝ^{m×n} with m≫n or m≪n, and where A may be rank-deficient. Tikhonov regularization may also be included. Since A is involved only in matrix-matrix and matrix-vector multiplications, it can be a dense or sparse matrix or a linear operator, and LSRN automatically speeds up when A is sparse or a fast linear operator. The preconditioning phase consists of a random normal projection, which is embarrassingly parallel, and a singular value decomposition of size ⌈γ min(m,n)⌉ × min(m,n), where γ is moderately larger than 1, e.g., γ=2. We prove that the preconditioned system is well-conditioned, with a strong concentration result on the extreme singular values, and hence that the number of iterations is fully predictable when we apply LSQR or the Chebyshev semi-iterative method. As we demonstrate, the Chebyshev method is particularly efficient for solving large problems on clusters with high communication cost. Numerical results show that on a shared-memory machine, LSRN is very competitive with LAPACK's DGELSD and a fast randomized least squares solver called Blendenpik on large dense problems, and it outperforms the least squares solver from SuiteSparseQR on sparse problems without sparsity patterns that can be exploited to reduce fill-in. Further experiments show that LSRN scales well on an Amazon Elastic Compute Cloud cluster.
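The preconditioning idea described above (random normal projection, then an SVD of the small sketch, then a preconditioned iterative solve) can be illustrated with a minimal sketch in Python for the overdetermined case m≫n. This is not the authors' parallel implementation; it uses NumPy and SciPy's LSQR, and the function name and oversampling default γ=2 follow the abstract, while the rank-truncation tolerance is an illustrative assumption.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

def lsrn_sketch(A, b, gamma=2.0, seed=0):
    """Illustrative sketch of LSRN-style preconditioning, m >> n case.

    Steps: (1) apply a random normal projection G to A (embarrassingly
    parallel in the real solver), (2) take the SVD of the small
    ceil(gamma*n)-by-n sketch, (3) use N = V * Sigma^{-1} as a right
    preconditioner so that A @ N is well-conditioned, (4) run LSQR on
    the preconditioned system and map the solution back.
    """
    m, n = A.shape
    s = int(np.ceil(gamma * n))              # sketch size, gamma > 1
    rng = np.random.default_rng(seed)
    G = rng.standard_normal((s, m))          # random normal projection
    GA = G @ A                               # small s-by-n sketch
    U, sigma, Vt = np.linalg.svd(GA, full_matrices=False)
    # Truncate tiny singular values so rank-deficient A is handled
    # (tolerance choice is an assumption, not from the paper).
    r = int(np.sum(sigma > sigma[0] * max(m, n) * np.finfo(float).eps))
    N = Vt[:r].T / sigma[:r]                 # right preconditioner, n-by-r
    y = lsqr(A @ N, b)[0]                    # solve min ||(A N) y - b||_2
    return N @ y                             # min-length solution x = N y
```

Because the preconditioned matrix A N has singular values concentrated near 1, LSQR's iteration count is small and predictable, which is the point of the preconditioning phase.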
References in zbMATH (referenced in 15 articles)
- Zhou, Quan; Guan, Yongtao: Fast model-fitting of Bayesian variable selection regression using the iterative complex factorization algorithm (2019)
- Jia, Zhongxiao; Yang, Yanfei: Modified truncated randomized singular value decomposition (MTRSVD) algorithms for large scale discrete ill-posed problems with general-form regularization (2018)
- Yang, Jiyan; Chow, Yin-Lam; Ré, Christopher; Mahoney, Michael W.: Weighted SGD for ℓ_p regression with randomized preconditioning (2018)
- Avron, Haim; Clarkson, Kenneth L.; Woodruff, David P.: Faster kernel ridge regression using sketching and preconditioning (2017)
- Spantini, Alessio; Cui, Tiangang; Willcox, Karen; Tenorio, Luis; Marzouk, Youssef: Goal-oriented optimal approximations of Bayesian linear inverse problems (2017)
- Pereyra, Victor: Model order reduction with oblique projections for large scale wave propagation (2016)
- Scott, Jennifer; Tuma, Miroslav: Preconditioning of linear least squares by robust incomplete factorization for implicitly held normal equations (2016)
- Wei, Yimin; Xie, Pengpeng; Zhang, Liping: Tikhonov regularization and randomized GSVD (2016)
- Zhang, Xiaowei; Cheng, Li; Chu, Delin; Liao, Li-Zhi; Ng, Michael K.; Tan, Roger C. E.: Incremental regularized least squares for dimensionality reduction of large-scale data (2016)
- Ma, Ping; Mahoney, Michael W.; Yu, Bin: A statistical perspective on algorithmic leveraging (2015)
- Spantini, Alessio; Solonen, Antti; Cui, Tiangang; Martin, James; Tenorio, Luis; Marzouk, Youssef: Optimal low-rank approximations of Bayesian linear inverse problems (2015)
- Meng, Xiangrui; Saunders, Michael A.; Mahoney, Michael W.: LSRN: A parallel iterative solver for strongly over- or underdetermined systems (2014)
- Meng, Xiangrui; Mahoney, Michael W.: Low-distortion subspace embeddings in input-sparsity time and applications to robust linear regression (2013)
- Drineas, Petros; Magdon-Ismail, Malik; Mahoney, Michael W.; Woodruff, David P.: Fast approximation of matrix coherence and statistical leverage (2012)
- Mahoney, Michael W.: Randomized algorithms for matrices and data (2011)