LAPACK is written in Fortran 90 and provides routines for solving systems of simultaneous linear equations, least-squares solutions of linear systems of equations, eigenvalue problems, and singular value problems. The associated matrix factorizations (LU, Cholesky, QR, SVD, Schur, generalized Schur) are also provided, as are related computations such as reordering of the Schur factorizations and estimating condition numbers. Dense and banded matrices are handled, but not general sparse matrices. In all areas, similar functionality is provided for real and complex matrices, in both single and double precision.

The original goal of the LAPACK project was to make the widely used EISPACK and LINPACK libraries run efficiently on shared-memory vector and parallel processors. On these machines, LINPACK and EISPACK are inefficient because their memory access patterns disregard the multi-layered memory hierarchies of the machines, so they spend too much time moving data instead of doing useful floating-point operations. LAPACK addresses this problem by reorganizing the algorithms to use block matrix operations, such as matrix multiplication, in the innermost loops. These block operations can be optimized for each architecture to account for the memory hierarchy, and so provide a transportable way to achieve high efficiency on diverse modern machines. We use the term "transportable" instead of "portable" because, for the fastest possible performance, LAPACK requires that highly optimized block matrix operations already be implemented on each machine.

LAPACK routines are written so that as much of the computation as possible is performed by calls to the Basic Linear Algebra Subprograms (BLAS). LAPACK was designed from the outset to exploit the Level 3 BLAS, a set of specifications for Fortran subprograms that perform various types of matrix multiplication and the solution of triangular systems with multiple right-hand sides.
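As a minimal sketch of the functionality listed above, the snippet below uses NumPy, whose numpy.linalg routines dispatch to LAPACK drivers (solve to *gesv, cholesky to *potrf, svd to *gesdd). The matrices are arbitrary illustrative data, not from the original text:

```python
import numpy as np

# Illustrative data only: a random 5x5 dense system.  numpy.linalg
# hands these computations to LAPACK under the hood.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
b = rng.standard_normal(5)

x = np.linalg.solve(A, b)               # LU-based linear solve (*gesv)
residual = np.linalg.norm(A @ x - b)

S = A @ A.T + 5 * np.eye(5)             # symmetric positive definite
L = np.linalg.cholesky(S)               # Cholesky factorization (*potrf)

U, s, Vt = np.linalg.svd(A)             # singular value decomposition (*gesdd)

chol_ok = np.allclose(L @ L.T, S)
svd_ok = np.allclose(U @ np.diag(s) @ Vt, A)
```

The same computations are available in single or double precision and for complex matrices, mirroring the real/complex coverage described above.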
Because of the coarse granularity of the Level 3 BLAS operations, their use promotes high efficiency on many high-performance computers, particularly if specially coded implementations are provided by the manufacturer. Highly efficient machine-specific implementations of the BLAS are available for many modern high-performance computers. For details of known vendor- or ISV-provided BLAS, consult the BLAS FAQ. Alternatively, the user can download ATLAS to automatically generate an optimized BLAS library for the architecture. A Fortran 77 reference implementation of the BLAS is available from netlib; however, its use is discouraged as it will not perform as well as a specifically tuned implementation.
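To illustrate why blocking matters, here is a hedged sketch (not LAPACK code) of a blocked matrix multiply: each inner update is itself a small matrix-matrix product, i.e. the coarse-grained Level 3 operation (like dgemm) that a vendor-tuned BLAS accelerates. The block size bs is an arbitrary illustrative choice; real implementations pick it to fit the cache hierarchy:

```python
import numpy as np

def blocked_matmul(A, B, bs):
    """Multiply A @ B block by block (illustrative sketch only)."""
    m, k = A.shape
    _, n = B.shape
    C = np.zeros((m, n))
    for i in range(0, m, bs):
        for j in range(0, n, bs):
            for p in range(0, k, bs):
                # Each update is a small matrix-matrix multiply: the
                # coarse-grained operation a tuned BLAS optimizes.
                C[i:i+bs, j:j+bs] += A[i:i+bs, p:p+bs] @ B[p:p+bs, j:j+bs]
    return C

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 80))
B = rng.standard_normal((80, 120))
C_blocked = blocked_matmul(A, B, bs=32)
same = np.allclose(C_blocked, A @ B)
```

The result is identical to the unblocked product; the point of the reorganization is purely to keep each block resident in fast memory while it is reused.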
This software is also referenced in ORMS.
References in zbMATH (referenced in 1647 articles, 4 standard articles)
Showing results 1 to 20 of 1647.
- Arndt, Daniel; Bangerth, Wolfgang; Davydov, Denis; Heister, Timo; Heltai, Luca; Kronbichler, Martin; Maier, Matthias; Pelteret, Jean-Paul; Turcksin, Bruno; Wells, David: The deal.II finite element library: design, features, and insights (2021)
- Bocharov, G. A.; Nechepurenko, Yu. M.; Khristichenko, M. Yu.; Grebennikov, D. S.: Optimal perturbations of systems with delayed independent variables for control of dynamics of infectious diseases based on multicomponent actions (2021)
- Hirshikesh; Pramod, A. L. N.; Ooi, Ean Tat; Song, Chongmin; Natarajan, Sundararajan: An adaptive scaled boundary finite element method for contact analysis (2021)
- Jason Rumengan, Terry Yue Zhuo, Conrad Sanderson: PyArmadillo: a streamlined linear algebra library for Python (2021) arXiv
- Kahl, Karsten; Lang, Bruno: Hypergraph edge elimination -- a symbolic phase for Hermitian eigensolvers based on rank-1 modifications (2021)
- Marshall, Joshua P.; Richardson, J. D.: A three-dimensional, p-version BEM: high-order refinement leveraged through regularization (2021)
- Nohra, Carlos J.; Raghunathan, Arvind U.; Sahinidis, Nikolaos: Spectral relaxations and branching strategies for global optimization of mixed-integer quadratic programs (2021)
- Petkov, Petko H.: Componentwise perturbation analysis of the Schur decomposition of a matrix (2021)
- Almeida Guimarães, Dilson; Salles da Cunha, Alexandre; Pereira, Dilson Lucas: Semidefinite programming lower bounds and branch-and-bound algorithms for the quadratic minimum spanning tree problem (2020)
- Andrew Finley, Abhirup Datta, Sudipto Banerjee: R package for Nearest Neighbor Gaussian Process models (2020) arXiv
- Arndt, Daniel; Bangerth, Wolfgang; Blais, Bruno; Clevenger, Thomas C.; Fehling, Marc; Grayver, Alexander V.; Heister, Timo; Heltai, Luca; Kronbichler, Martin; Maier, Matthias; Munch, Peter; Pelteret, Jean-Paul; Rastak, Reza; Tomas, Ignacio; Turcksin, Bruno; Wang, Zhuoran; Wells, David: The deal.II library, version 9.2 (2020)
- Barrera, Javiera; Moreno, Eduardo; Varas K., Sebastián: A decomposition algorithm for computing income taxes with pass-through entities and its application to the Chilean case (2020)
- Ben Hermans, Andreas Themelis, Panagiotis Patrinos: QPALM: A Proximal Augmented Lagrangian Method for Nonconvex Quadratic Programs (2020) arXiv
- Bollhöfer, Matthias; Schenk, Olaf; Janalik, Radim; Hamm, Steve; Gullapalli, Kiran: State-of-the-art sparse direct solvers (2020)
- Brás, C. P.; Martínez, J. M.; Raydan, M.: Large-scale unconstrained optimization using separable cubic modeling and matrix-free subspace minimization (2020)
- Cambier, Léopold; Chen, Chao; Boman, Erik G.; Rajamanickam, Sivasankaran; Tuminaro, Raymond S.; Darve, Eric: An algebraic sparsified nested dissection algorithm using low-rank approximations (2020)
- Chang, Xiao-Wen; Titley-Peloquin, David: An improved algorithm for generalized least squares estimation (2020)
- Cinal, M.: Highly accurate numerical solution of Hartree-Fock equation with pseudospectral method for closed-shell atoms (2020)
- Cortinovis, Alice; Kressner, Daniel: Low-rank approximation in the Frobenius norm by column and row subset selection (2020)
- De Luca, Pasquale; Galletti, Ardelio; Giunta, Giulio; Marcellino, Livia; Raei, Marzie: Performance analysis of a multicore implementation for solving a two-dimensional inverse anomalous diffusion problem (2020)