Mixed-precision Cholesky QR factorization and its case studies on multicore CPU with multiple GPUs.

To orthonormalize the columns of a dense matrix, the Cholesky QR (CholQR) requires only one global reduction between the parallel processing units and performs most of its computation using BLAS-3 kernels. As a result, compared to other orthogonalization algorithms, CholQR obtains superior performance on many current computer architectures, where communication is becoming increasingly expensive compared to arithmetic operations. This is especially true when the input matrix is tall-skinny. Unfortunately, the orthogonality error of CholQR depends quadratically on the condition number of the input matrix, and it is numerically unstable when the matrix is ill-conditioned. To enhance the stability of CholQR, we recently used mixed-precision arithmetic: the input and output matrices are in the working precision, but some of the intermediate results are accumulated in the doubled precision. In this paper, we analyze the numerical properties of this mixed-precision CholQR. Our analysis shows that by selectively using the doubled precision, the orthogonality error of the mixed-precision CholQR depends only linearly on the condition number of the input matrix. We provide numerical results to demonstrate the improved numerical stability of the mixed-precision CholQR in practice. We then study its performance. When the target hardware does not support the desired higher precision, software emulation is needed. For example, using software-emulated double-double precision for the working 64-bit double precision, the mixed-precision CholQR requires about $8.5\times$ more floating-point instructions than the standard CholQR.
On the other hand, the increase in the communication cost using the double-double precision is less significant, and our performance results on multicore CPUs with different graphics processing units (GPUs) demonstrate that the overhead of using the double-double arithmetic is decreasing on newer architectures, where computation is becoming less expensive compared to communication. As a result, on the latest NVIDIA GPU, the mixed-precision CholQR was only $1.4\times$ slower than the standard CholQR. Finally, we present case studies of using the mixed-precision CholQR within communication-avoiding variants of Krylov subspace projection methods for solving a nonsymmetric linear system of equations and for solving a symmetric eigenvalue problem, on a multicore CPU with multiple GPUs. These case studies demonstrate that by using the higher precision for this small but critical segment of the Krylov methods, we can improve not only the overall numerical stability of the solvers but also, in some cases, their performance.
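The abstract's description of CholQR (one Gram-matrix reduction, a small Cholesky factorization, a BLAS-3 triangular solve) and of the mixed-precision variant (intermediate results accumulated in a doubled precision) can be illustrated with a minimal NumPy sketch. This is not the authors' GPU implementation: the function names are ours, and float64 accumulation for a float32 working precision stands in for the paper's double-double accumulation for 64-bit working precision.

```python
import numpy as np

def cholqr(V):
    """Standard CholQR: orthogonality error ~ eps * cond(V)^2."""
    # Gram matrix G = V^T V -- the only step needing a global
    # reduction across parallel units (BLAS-3: syrk/gemm).
    G = V.T @ V
    # Cholesky of the small k-by-k Gram matrix: G = R^T R.
    R = np.linalg.cholesky(G).T
    # Q = V R^{-1} via a triangular solve (BLAS-3: trsm).
    Q = np.linalg.solve(R.T, V.T).T
    return Q, R

def cholqr_mixed(V32):
    """Mixed-precision CholQR sketch: inputs/outputs in the working
    precision (float32 here), Gram accumulation and factorization in a
    doubled precision (float64 here, standing in for double-double
    when the working precision is float64)."""
    V64 = V32.astype(np.float64)
    G = V64.T @ V64                      # accumulated in doubled precision
    R = np.linalg.cholesky(G).T
    Q = np.linalg.solve(R.T, V64.T).T    # triangular solve in doubled precision
    # Round the results back to the working precision; the analysis in
    # the paper shows the orthogonality error is now ~ eps * cond(V).
    return Q.astype(np.float32), R.astype(np.float32)
```

The key design point the abstract highlights is visible here: only the small Gram-matrix accumulation and its factorization need the higher precision, so the extra arithmetic is confined to a segment whose communication cost is unchanged.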
References in zbMATH (referenced in 10 articles)
- Higham, Nicholas J.; Pranesh, Srikara: Exploiting lower precision arithmetic in solving symmetric positive definite linear systems and least squares problems (2021)
- Fukaya, Takeshi; Kannan, Ramaseshan; Nakatsukasa, Yuji; Yamamoto, Yusaku; Yanagisawa, Yuka: Shifted Cholesky QR for computing the QR factorization of ill-conditioned matrices (2020)
- Alvermann, Andreas; Basermann, Achim; Bungartz, Hans-Joachim; Carbogno, Christian; Ernst, Dominik; Fehske, Holger; Futamura, Yasunori; Galgon, Martin; Hager, Georg; Huber, Sarah; Huckle, Thomas; Ida, Akihiro; Imakura, Akira; Kawai, Masatoshi; Köcher, Simone; Kreutzer, Moritz; Kus, Pavel; Lang, Bruno; Lederer, Hermann; Manin, Valeriy; Marek, Andreas; Nakajima, Kengo; Nemec, Lydia; Reuter, Karsten; Rippl, Michael; Röhrig-Zöllner, Melven; Sakurai, Tetsuya; Scheffler, Matthias; Scheurer, Christoph; Shahzad, Faisal; Simoes Brambila, Danilo; Thies, Jonas; Wellein, Gerhard: Benefits from using mixed precision computations in the ELPA-AEO and ESSEX-II eigensolver projects (2019)
- Barlow, Jesse L.: Block modified Gram-Schmidt algorithms and their analysis (2019)
- Boukaram, Wajih; Turkiyyah, George; Keyes, David: Randomized GPU algorithms for the construction of hierarchical matrices from matrix-vector operations (2019)
- Grigori, Laura; Tissot, Olivier: Scalable linear solvers based on enlarged Krylov subspaces with dynamic reduction of search directions (2019)
- Tomás, Andrés E.; Quintana-Ortí, Enrique S.: Cholesky and Gram-Schmidt orthogonalization for tall-and-skinny QR factorizations on graphics processors (2019)
- Li, Huamin; Kluger, Yuval; Tygert, Mark: Randomized algorithms for distributed computation of principal component analysis and singular value decomposition (2018)
- Ralha, Rui: Mixed precision bisection (2018)
- Yamazaki, Ichitaro; Tomov, Stanimire; Dongarra, Jack: Mixed-precision Cholesky QR factorization and its case studies on multicore CPU with multiple GPUs (2015)