Algorithms For Efficient Matrix Multiplication
What is the fastest algorithm for matrix multiplication?
The running time of square matrix multiplication, if carried out naïvely, is O(n^3). The running time for multiplying rectangular matrices (one m×p matrix with one p×n matrix) is O(mnp). However, more efficient algorithms exist, such as Strassen's algorithm, devised by Volker Strassen in 1969 and often referred to as "fast matrix multiplication". It is based on a way of multiplying two 2×2 matrices which requires only 7 multiplications (instead of the usual 8), at the expense of several additional addition and subtraction operations. Applying this recursively gives an algorithm with a multiplicative cost of O(n^(log2 7)) ≈ O(n^2.807). Strassen's algorithm is more complex, and its numerical stability is reduced compared to the naïve algorithm. Nevertheless, it appears in several libraries, such as BLAS, where it is significantly more efficient for matrices with dimensions n > 100, and it is very useful for large matrices over exact domains such as finite fields, where numerical stability is not an issue. A sketch of the scheme follows.
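The following is a minimal Python sketch of Strassen's scheme (an illustration, not from the original article), assuming square inputs whose dimension is a power of two; a production implementation would pad inputs and fall back to the naive method below a size threshold, since the recursion overhead dominates for small blocks.

```python
import numpy as np

def strassen(A, B):
    """Multiply square matrices whose dimension is a power of two."""
    n = A.shape[0]
    if n == 1:
        return A * B  # 1x1 base case: a single scalar multiplication

    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]

    # The seven recursive products (instead of the usual eight).
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)

    # Reassemble the quadrants of the product from the seven products.
    C11 = M1 + M4 - M5 + M7
    C12 = M3 + M5
    C21 = M2 + M4
    C22 = M1 - M2 + M3 + M6
    return np.block([[C11, C12], [C21, C22]])

# Quick check against NumPy's built-in product.
A = np.random.rand(8, 8)
B = np.random.rand(8, 8)
assert np.allclose(strassen(A, B), A @ B)
```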
The algorithm with the lowest currently known exponent is a generalization of the Coppersmith–Winograd algorithm due to Virginia Vassilevska Williams, with an asymptotic complexity of O(n^2.3727). This algorithm, and the Coppersmith–Winograd algorithm on which it is based, are similar to Strassen's algorithm: a way is devised for multiplying two k×k matrices with fewer than k^3 multiplications, and this technique is applied recursively; the recurrence below makes the resulting exponent explicit. However, the constant coefficient hidden by the Big O notation is so large that these algorithms are only worthwhile for matrices that are too large to handle on present-day computers.
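The exponent arises from a standard recurrence analysis (not spelled out in the original): if the base case multiplies two k×k matrices using m scalar multiplications, the recursive algorithm's cost satisfies

```latex
% Recursive cost when the base case multiplies two k x k matrices
% using m scalar multiplications (Strassen: k = 2, m = 7).
\[
  T(n) = m \, T(n/k) + O(n^2)
  \quad\Longrightarrow\quad
  T(n) = O\!\left(n^{\log_k m}\right),
  \qquad \log_2 7 \approx 2.807.
\]
```

For Strassen's algorithm, k = 2 and m = 7, recovering the O(n^2.807) bound above.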
Since any algorithm for multiplying two n×n matrices has to process all 2n^2 entries, there is an asymptotic lower bound of Ω(n^2) operations. Raz (2002) proves a lower bound of Ω(n^2 log n) for bounded coefficient arithmetic circuits over the real or complex numbers.
Cohn et al. (2003, 2005) put methods such as the Strassen and Coppersmith–Winograd algorithms in an entirely different group-theoretic context, by utilising triples of subsets of finite groups which satisfy a disjointness property called the triple product property (TPP). They show that if families of wreath products of Abelian groups with symmetric groups realise families of subset triples with a simultaneous version of the TPP, then there are matrix multiplication algorithms with essentially quadratic complexity. Most researchers believe that this is indeed the case. However, Alon, Shpilka and Umans have recently shown that some of these conjectures implying fast matrix multiplication are incompatible with another plausible conjecture, the sunflower conjecture.
Because of the nature of matrix operations and the layout of matrices in memory, it is typically possible to achieve substantial performance gains through parallelization and vectorization. It should therefore be noted that some algorithms with lower time complexity on paper may incur hidden costs on real machines, as the sketch below illustrates.
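The following is a small Python/NumPy timing sketch (illustrative only; the size is arbitrary and the measured times are machine-dependent) showing that two routines with the same O(n^3) asymptotic cost can differ enormously in practice once vectorized, cache-aware kernels enter the picture:

```python
import time
import numpy as np

def naive_matmul(A, B):
    """Textbook triple loop: poor locality, no SIMD, interpreted overhead."""
    n, p = A.shape
    p2, m = B.shape
    assert p == p2
    C = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            s = 0.0
            for k in range(p):
                s += A[i, k] * B[k, j]
            C[i, j] = s
    return C

n = 128
A, B = np.random.rand(n, n), np.random.rand(n, n)

t0 = time.perf_counter()
C1 = naive_matmul(A, B)           # same O(n^3), large constant factor
t1 = time.perf_counter()
C2 = A @ B                        # delegates to a vectorized BLAS routine
t2 = time.perf_counter()

assert np.allclose(C1, C2)
print(f"naive loop: {t1 - t0:.3f}s, BLAS: {t2 - t1:.6f}s")
```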
Freivalds' algorithm is a simple Monte Carlo algorithm that, given n×n matrices A, B and C, verifies in O(n^2) time whether AB = C.
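Here is a minimal Python/NumPy sketch of Freivalds' check (integer matrices are used so that equality testing is exact): each trial tests A(Br) = Cr for a random vector r in {0,1}^n, costing three O(n^2) matrix-vector products instead of an O(n^3) multiplication. If AB ≠ C, a single trial wrongly accepts with probability at most 1/2, so k independent trials drive the error below 2^-k.

```python
import numpy as np

def freivalds(A, B, C, trials=20, rng=None):
    """Return True if A @ B == C (with high probability), False if provably not."""
    rng = np.random.default_rng() if rng is None else rng
    n = C.shape[1]
    for _ in range(trials):
        r = rng.integers(0, 2, size=n)            # random vector in {0,1}^n
        if not np.array_equal(A @ (B @ r), C @ r):
            return False                          # witness found: A @ B != C
    return True                                   # all trials passed

A = np.random.randint(0, 10, (50, 50))
B = np.random.randint(0, 10, (50, 50))
assert freivalds(A, B, A @ B)          # correct product is accepted
assert not freivalds(A, B, A @ B + 1)  # wrong product is rejected (w.h.p.)
```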