Basic Linear Algebra Subprograms
Basic Linear Algebra Subprograms (BLAS) is a de facto application programming interface standard for publishing libraries that perform basic linear algebra operations such as vector and matrix multiplication. First published in 1979, the BLAS are used to build larger packages such as LAPACK. Heavily used in high-performance computing, highly optimized implementations of the BLAS interface have been developed by hardware vendors such as Intel, as well as by other authors (e.g., ATLAS is a portable, self-optimizing BLAS). The LINPACK benchmark relies heavily on DGEMM, a BLAS subroutine, for its performance.
Functionality
The BLAS functionality is divided into three levels: 1, 2 and 3.
Level 1
This level contains vector operations of the form
$y \leftarrow \alpha x + y$,
as well as scalar dot products and vector norms, among other things.
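For concreteness, here is a minimal sketch in C using the standard CBLAS binding (the exact header location and link flags vary by implementation); it performs a double-precision AXPY update and then a dot product:

    #include <stdio.h>
    #include <cblas.h>  /* standard CBLAS C binding; link with e.g. -lblas */

    int main(void) {
        double x[3] = {1.0, 2.0, 3.0};
        double y[3] = {4.0, 5.0, 6.0};

        /* Level 1 AXPY: y <- 2.0*x + y, unit stride in both vectors */
        cblas_daxpy(3, 2.0, x, 1, y, 1);

        /* Level 1 dot product of x with the updated y */
        double d = cblas_ddot(3, x, 1, y, 1);

        printf("y = [%g %g %g], dot = %g\n", y[0], y[1], y[2], d);
        return 0;
    }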
Level 2
This level contains matrix-vector operations of the form
$y \leftarrow \alpha A x + \beta y$,
as well as solving $Tx = y$ for $x$, with $T$ triangular, among other things.
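A minimal CBLAS sketch of both Level 2 operations above, a general matrix-vector product (DGEMV) and a triangular solve (DTRSV), assuming row-major storage and illustrative values:

    #include <stdio.h>
    #include <cblas.h>

    int main(void) {
        /* 2x2 matrix A in row-major order */
        double A[4] = {1.0, 2.0,
                       3.0, 4.0};
        double x[2] = {1.0, 1.0};
        double y[2] = {0.0, 0.0};

        /* Level 2 GEMV: y <- 1.0*A*x + 0.0*y */
        cblas_dgemv(CblasRowMajor, CblasNoTrans, 2, 2,
                    1.0, A, 2, x, 1, 0.0, y, 1);
        printf("y = [%g %g]\n", y[0], y[1]);

        /* Level 2 TRSV: solve T*b = rhs in place, where T is the
           lower triangle of A (the upper part is not referenced) */
        double b[2] = {1.0, 5.0};
        cblas_dtrsv(CblasRowMajor, CblasLower, CblasNoTrans, CblasNonUnit,
                    2, A, 2, b, 1);
        printf("x = [%g %g]\n", b[0], b[1]);
        return 0;
    }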
Level 3
This level contains matrix-matrix operations of the form
$C \leftarrow \alpha A B + \beta C$,
as well as solving $B \leftarrow \alpha T^{-1} B$ for triangular matrices $T$, among other things. This level contains the widely used General Matrix Multiply (GEMM) operation.
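A minimal CBLAS sketch of GEMM in double precision (DGEMM); the small row-major matrices are illustrative only:

    #include <stdio.h>
    #include <cblas.h>

    int main(void) {
        /* Row-major 2x2 matrices */
        double A[4] = {1.0, 2.0,
                       3.0, 4.0};
        double B[4] = {5.0, 6.0,
                       7.0, 8.0};
        double C[4] = {0.0, 0.0,
                       0.0, 0.0};

        /* Level 3 GEMM: C <- 1.0*A*B + 0.0*C
           Arguments: order, transA, transB, M, N, K,
                      alpha, A, lda, B, ldb, beta, C, ldc */
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    2, 2, 2, 1.0, A, 2, B, 2, 0.0, C, 2);

        printf("C = [%g %g; %g %g]\n", C[0], C[1], C[2], C[3]);
        return 0;
    }

Because GEMM dominates the run time of dense computations such as the LINPACK benchmark, it is typically the routine that optimized implementations tune most aggressively.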
Implementations
- refblas
- The official reference implementation from Netlib. C and Fortran 77 versions are available.[1]
- Accelerate
- Apple Computer's framework for Mac OS X, which includes tuned versions of BLAS and LAPACK for both PowerPC and Intel Core processors.[2]
- ACML
- The AMD Core Math Library, supporting the AMD Athlon and Opteron CPUs under Linux and Windows.[3]
- ATLAS
- Automatically Tuned Linear Algebra Software, an open source implementation of BLAS APIs for C and Fortran 77.[4]
- CUDA SDK
- The NVIDIA CUDA SDK includes BLAS functionality for writing C programs that run on GeForce 8 Series graphics cards.
- ESSL
- IBM's Engineering and Scientific Subroutine Library, supporting the PowerPC architecture under AIX and Linux.[5]
- libflame
- The FLAME project's implementation of a dense linear algebra library, including the BLAS.[6]
- Goto BLAS
- Kazushige Goto's implementation of BLAS.[7]
- HP MLIB
- HP's math library, supporting the IA-64, PA-RISC, x86 and Opteron architectures under HP-UX and Linux.
- Intel MKL
- The Intel Math Kernel Library, supporting the Intel Pentium and Itanium CPUs under Linux, Windows and Mac OS X.[8]
- MathKeisan
- NEC's math library, supporting the NEC SX architecture under SUPER-UX, and Itanium under Linux.[9]
- PDLIB/SX
- NEC's Public Domain Mathematical Library for the NEC SX-4 system.[10]
- SCSL
- SGI's Scientific Computing Software Library contains BLAS and LAPACK implementations for SGI's IRIX workstations.[11]
- Sun Performance Library
- Optimized BLAS and LAPACK for SPARC and AMD64 architectures under Solaris 8, 9, and 10.[12]
- uBLAS
- A generic C++ template class library providing BLAS functionality. Part of the Boost library. Unlike other implementations, uBLAS focuses on correctness of the algorithms, using advanced C++ features, rather than on high performance.[13]
- GSL
- The GNU Scientific Library contains a multi-platform implementation in C, distributed under the GNU General Public License.
The Sparse BLAS
Sparse extensions to the dense BLAS described above also exist, for example in the ACML.
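Sparse BLAS interfaces differ between implementations. The following plain-C sketch is not any particular library's API (csr_matvec is a hypothetical helper); it only illustrates the kind of sparse matrix-vector product such routines provide, using compressed sparse row (CSR) storage:

    #include <stdio.h>

    /* Hypothetical helper, not part of any BLAS API: y <- A*x for a
       sparse matrix A stored in compressed sparse row (CSR) format. */
    static void csr_matvec(int m, const int *row_ptr, const int *col_idx,
                           const double *val, const double *x, double *y) {
        for (int i = 0; i < m; i++) {
            double sum = 0.0;
            /* Accumulate only the stored nonzeros of row i */
            for (int k = row_ptr[i]; k < row_ptr[i + 1]; k++)
                sum += val[k] * x[col_idx[k]];
            y[i] = sum;
        }
    }

    int main(void) {
        /* 3x3 sparse matrix with 4 nonzeros:
           [1 0 2]
           [0 3 0]
           [0 0 4] */
        int row_ptr[4] = {0, 2, 3, 4};
        int col_idx[4] = {0, 2, 1, 2};
        double val[4] = {1.0, 2.0, 3.0, 4.0};
        double x[3] = {1.0, 1.0, 1.0};
        double y[3];

        csr_matvec(3, row_ptr, col_idx, val, x, y);
        printf("y = [%g %g %g]\n", y[0], y[1], y[2]);  /* [3 3 4] */
        return 0;
    }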
See also
- Numerical linear algebra, the type of problem BLAS solves
- LAPACK, the Linear Algebra Package
External links
- BLAS homepage on Netlib.org
- BLAS FAQ
- BLAS operations from the GNU Scientific Library reference manual
- BLAS Quick Reference Guide from LAPACK Users' Guide
- Lawson Oral History: One of the original authors of the BLAS discusses its creation in an oral history interview. Charles L. Lawson, oral history interview by Thomas Haigh, 6 and 7 November 2004, San Clemente, California. Society for Industrial and Applied Mathematics, Philadelphia, PA.
- Dongarra Oral History: In an oral history interview, Jack Dongarra explores the early relationship of BLAS to LINPACK, the creation of higher-level BLAS versions for new architectures, and his later work on the ATLAS system to automatically optimize BLAS for particular machines. Jack Dongarra, oral history interview by Thomas Haigh, 26 April 2005, University of Tennessee, Knoxville, TN. Society for Industrial and Applied Mathematics, Philadelphia, PA.
- An Overview of the Sparse Basic Linear Algebra Subprograms: The New Standard from the BLAS Technical Forum.[14]