Implementation of Matrix Transposition on Computers
On a computer, one can often avoid explicitly transposing a matrix in memory by simply accessing the same data in a different order. For example, software libraries for linear algebra, such as BLAS, typically provide options to specify that certain matrices are to be interpreted in transposed order to avoid the necessity of data movement.
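As a minimal sketch of this idea, the C interface to BLAS (CBLAS) accepts a transpose flag for each operand of a matrix multiplication. The example below assumes a CBLAS implementation such as OpenBLAS is linked; the wrapper name multiply_transposed and the dimension names m, n, k are illustrative. It computes C = AᵀB by passing CblasTrans for A, so the data for A are never reordered in memory.

    /* Sketch: C = A^T * B via CBLAS, without physically transposing A.
     * Assumes a CBLAS implementation (e.g. OpenBLAS) is linked; the
     * wrapper name and dimensions are illustrative. */
    #include <cblas.h>

    void multiply_transposed(const double *A,  /* k x m, row-major */
                             const double *B,  /* k x n, row-major */
                             double       *C,  /* m x n, row-major */
                             int m, int n, int k)
    {
        /* CblasTrans instructs the library to read A in transposed
         * order; no data are moved in memory. */
        cblas_dgemm(CblasRowMajor, CblasTrans, CblasNoTrans,
                    m, n, k,
                    1.0, A, m,    /* lda = row length of A as stored */
                         B, n,    /* ldb */
                    0.0, C, n);   /* ldc */
    }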
However, there remain a number of circumstances in which it is necessary or desirable to physically reorder a matrix in memory to its transposed ordering. For example, with a matrix stored in row-major order, the rows of the matrix are contiguous in memory and the columns are discontiguous. If repeated operations need to be performed on the columns, for example in a fast Fourier transform algorithm, transposing the matrix in memory (to make the columns contiguous) may improve performance by increasing memory locality.
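A minimal illustration of such a reordering is an out-of-place transpose, which copies a row-major n × m matrix into a second buffer so that each original column becomes a contiguous row of the result. The routine name transpose_copy and the use of double elements are assumptions of the sketch.

    /* Sketch: out-of-place transpose of a row-major n x m matrix 'a'
     * into 'at' (m x n, row-major), making the columns of 'a'
     * contiguous in the result. */
    #include <stddef.h>

    void transpose_copy(const double *a, double *at, size_t n, size_t m)
    {
        for (size_t i = 0; i < n; ++i)
            for (size_t j = 0; j < m; ++j)
                at[j * n + i] = a[i * m + j];  /* column j of a -> row j of at */
    }

This approach requires a second buffer of nm elements, which motivates the in-place problem discussed next.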
Ideally, one might hope to transpose a matrix with minimal additional storage. This leads to the problem of transposing an n × m matrix in place, with O(1) additional storage or, at most, additional storage much less than mn. For n ≠ m, this involves a complicated permutation of the data elements that is non-trivial to implement in place. Efficient in-place matrix transposition has therefore been the subject of numerous research publications in computer science, starting in the late 1950s, and several algorithms have been developed.
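One well-known family of methods follows the cycles of the transposition permutation: in a row-major n × m matrix, the element at linear index k (other than the first and last, which are fixed points) moves to index (k·n) mod (nm − 1). The sketch below applies this permutation with O(1) additional storage; the routine name transpose_in_place is illustrative, and the simple cycle-leader test favours clarity over speed.

    /* Sketch: in-place transpose of a row-major n x m matrix by
     * following the cycles of the permutation k -> (k*n) mod (n*m - 1).
     * O(1) extra storage; each cycle is first scanned to find its
     * smallest index, so the running time is super-linear in general. */
    #include <stddef.h>

    void transpose_in_place(double *a, size_t n, size_t m)
    {
        if (n <= 1 || m <= 1)
            return;                    /* a vector keeps the same layout */
        const size_t last = n * m - 1; /* indices 0 and last are fixed */

        for (size_t start = 1; start < last; ++start) {
            /* Start a cycle only from its smallest index, so that no
             * element is moved twice. */
            size_t k = start;
            do {
                k = (k * n) % last;
            } while (k > start);
            if (k < start)
                continue;

            /* Push the displaced value around the cycle. */
            double val = a[start];
            k = (start * n) % last;
            while (k != start) {
                double tmp = a[k];
                a[k] = val;
                val = tmp;
                k = (k * n) % last;
            }
            a[start] = val;
        }
    }

After the call, the buffer holds the m × n transpose in row-major order. Published in-place algorithms improve on this simple sketch, for example with cheaper tests for cycle leaders and more cache-friendly access patterns.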