This paper describes the Automatically Tuned Linear Algebra Software (ATLAS) project, as well as the fundamental principles that underlie it. ATLAS is an instantiation of a new paradigm in high-performance library production and maintenance, which we term automated empirical optimization of software; this style of library management has been created in order to allow software to keep pace with the incredible rate of hardware advancement inherent in Moore's Law. ATLAS is the application of this new paradigm to linear algebra software, with the present emphasis on the Basic Linear Algebra Subprograms (BLAS), a widely used, performance-critical linear algebra kernel library.

This software is also referenced in ORMS.

References in zbMATH (referenced in 197 articles, 1 standard article)

Showing results 161 to 180 of 197.
Sorted by year (citations)


  1. Leuschel, Michael; Bruynooghe, Maurice: Logic program specialisation through partial deduction: Control issues (2002)
  2. Liniker, Peter; Beckmann, Olav; Kelly, Paul H. J.: Delayed evaluation, self-optimising software components as a programming model (2002)
  3. Li, Xiaoye S.; Martin, Michael C.; Thompson, Brandon J.; Tung, Teresa; Yoo, Daniel J.; Demmel, James W.; Bailey, David H.; Henry, Greg; Hida, Yozo; Iskandar, Jimmy; Kahan, William; Kang, Suh Y.; Kapur, Anil: Design, implementation and testing of extended and mixed precision BLAS (2002)
  4. Moore, Keith; Dongarra, Jack: NetBuild: transparent cross-platform access to computational software libraries (2002)
  5. Stathopoulos, Andreas; Wu, Kesheng: A block orthogonalization procedure with constant synchronization requirements (2002)
  6. Stpiczyński, Przemysław: A new message passing algorithm for solving linear recurrence systems (2002)
  7. Valsalam, Vinod; Skjellum, Anthony: A framework for high-performance matrix multiplication based on hierarchical abstractions, algorithms and optimized low-level kernels (2002)
  8. Aberdeen, Douglas; Baxter, Jonathan: Emmerald: a fast matrix-matrix multiply using Intel’s SSE instructions (2001)
  9. Andersen, Bjarne Stig; Waśniewski, Jerzy; Gustavson, Fred G.: A recursive formulation of Cholesky factorization of a matrix in packed storage (2001)
  10. Bilardi, Gianfranco; D’Alberto, Paolo; Nicolau, Alex: Fractal matrix multiplication: A case study on portability of cache performance (2001)
  11. Bilardi, Gianfranco; Peserico, Enoch: A characterization of temporal locality and its portability across memory hierarchies (2001)
  12. Bowers, K. J.: Accelerating a particle-in-cell simulation using a hybrid counting sort (2001)
  13. Choi, Jaeyoung: PoLAPACK: Parallel factorization routines with algorithmic blocking (2001)
  14. Clint Whaley, R.; Petitet, A.; Dongarra, J. J.: Automated empirical optimizations of software and the ATLAS project (2001)
  15. Cociorva, D.; Wilkins, J.; Baumgartner, G.; Sadayappan, P.; Ramanujam, J.; Nooijen, M.; Bernholdt, D.; Harrison, R.: Towards automatic synthesis of high-performance codes for electronic structure calculations: Data locality optimization (2001)
  16. Geus, R.; Röllin, S.: Towards a fast parallel sparse symmetric matrix-vector multiplication (2001)
  17. Gropp, William D.: Learning from the success of MPI (2001)
  18. Gunnels, John A.; Gustavson, Fred G.; Henry, Greg M.; van de Geijn, Robert A.: FLAME: formal linear algebra methods environment (2001)
  19. Gunnels, John A.; Henry, Greg M.; van de Geijn, Robert A.: A family of high-performance matrix multiplication algorithms (2001)
  20. Guyer, Samuel Z.; Lin, Calvin: Optimizing the use of high performance software libraries (2001)
