Matrix multiplication efficiency
Mia Lopez
I have a multiplication $A * B * A^T$, where $B$ is a covariance matrix (i.e. symmetric, square) and $A$ is an arbitrary matrix with compatible dimensions. Given the symmetry of $B$ and the fact that I multiply by $A$ and $A^T$, it really feels like there must be a more efficient way to compute the result than two multiplications, but I haven't been able to find anything. Does anyone have any insight on the problem, or why there isn't any better way of doing it?
Thanks!
2 Answers
Since $B$ is symmetric, $ABA^T$ is also symmetric, and so you only need to determine the entries above the diagonal. Thus, instead of doing two matrix multiplications, you can make do with one and a half.
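A minimal NumPy sketch of this trick: the first product $M = AB$ is computed in full, but in the second product $MA^T$ only the upper triangle is computed explicitly and then mirrored. The function name and loop structure are illustrative, not a library API.

```python
import numpy as np

def aba_t(A, B):
    """Compute A B A^T, exploiting that the result is symmetric.

    Only the upper triangle of (A B) A^T is computed explicitly;
    the lower triangle is filled in by symmetry, so the second
    multiplication costs roughly half as much as the first.
    """
    M = A @ B                      # first multiplication, done in full
    n = A.shape[0]
    C = np.empty((n, n))
    for i in range(n):
        # row i of M times rows i..n-1 of A (i.e. columns i..n-1 of A^T)
        C[i, i:] = M[i] @ A[i:].T
        C[i:, i] = C[i, i:]        # mirror into the lower triangle
    return C
```

This is the "one and a half multiplications" count: the loop performs only the $n(n+1)/2$ dot products above (and on) the diagonal instead of all $n^2$.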
However, without anything else special about $A$ or $B$, I am skeptical that any speedup you get will be more than a constant factor. In particular, it is very unlikely that computing $ABA^T$ can be done faster than computing $AB$. Algorithmically, this means that you won't change the order of magnitude of the run time of whatever you are using this for, and so unless you are doing a huge number of these products, looking for a clever optimization is unlikely to net you great gains.
However, on a related note, something that will speed up the multiplication, at least when you are dealing with large matrices, is Strassen's algorithm, which is an algorithm for multiplying $2\times 2$ matrices with 7 multiplications instead of 8, which can recursively be used to multiply large matrices together quickly ($O(n^{\log_2 7+o(1)})$ instead of $O(n^3)$).
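For concreteness, here is a textbook sketch of Strassen's recursion for square matrices whose size is a power of two; the `cutoff` below which it falls back to ordinary multiplication is a tuning assumption, since the recursion only pays off for fairly large matrices.

```python
import numpy as np

def strassen(X, Y, cutoff=64):
    """Multiply square matrices (size a power of two) with Strassen's
    7-multiplication recursion, falling back to ordinary multiplication
    below `cutoff`."""
    n = X.shape[0]
    if n <= cutoff:
        return X @ Y
    h = n // 2
    A, B = X[:h, :h], X[:h, h:]    # quadrants of X
    C, D = X[h:, :h], X[h:, h:]
    E, F = Y[:h, :h], Y[:h, h:]    # quadrants of Y
    G, H = Y[h:, :h], Y[h:, h:]
    # the 7 recursive products (instead of the naive 8)
    P1 = strassen(A, F - H, cutoff)
    P2 = strassen(A + B, H, cutoff)
    P3 = strassen(C + D, E, cutoff)
    P4 = strassen(D, G - E, cutoff)
    P5 = strassen(A + D, E + H, cutoff)
    P6 = strassen(B - D, G + H, cutoff)
    P7 = strassen(A - C, E + F, cutoff)
    # recombine into the quadrants of X @ Y
    top = np.hstack([P5 + P4 - P2 + P6, P1 + P2])
    bot = np.hstack([P3 + P4, P1 + P5 - P3 - P7])
    return np.vstack([top, bot])
```

Each level replaces 8 half-size multiplications with 7, which is where the $O(n^{\log_2 7})$ exponent comes from; in practice, libraries use a large cutoff because the extra additions dominate at small sizes.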
If you're going to be using the same $A$ for many different $B$, it might pay to precompute the matrices $C_{ii} = A E_{ii} A^T$ and $C_{ij} = A (E_{ij} + E_{ji}) A^T$ for all $i \le j$, where $E_{ij}$ is the matrix with $(i,j)$ entry $1$ and all others $0$. Then $A B A^T = \sum_i \sum_{j \ge i} b_{ij} C_{ij}$.
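A sketch of this precomputation in NumPy, using the fact that $A E_{ij} A^T = a_i a_j^T$ where $a_i$ is the $i$-th column of $A$, so each $C_{ij}$ is just a sum of outer products (the function names are illustrative):

```python
import numpy as np

def precompute_C(A):
    """Precompute C_ii = A E_ii A^T and C_ij = A (E_ij + E_ji) A^T for i < j.

    Since A E_ij A^T = outer(a_i, a_j) with a_i the i-th column of A,
    no matrix-matrix products are needed here, only outer products.
    """
    n = A.shape[1]
    C = {}
    for i in range(n):
        ai = A[:, i]
        C[i, i] = np.outer(ai, ai)
        for j in range(i + 1, n):
            aj = A[:, j]
            C[i, j] = np.outer(ai, aj) + np.outer(aj, ai)
    return C

def aba_t_from_precomputed(C, B):
    """A B A^T = sum over i <= j of b_ij * C_ij (B assumed symmetric)."""
    n = B.shape[0]
    return sum(B[i, j] * C[i, j] for i in range(n) for j in range(i, n))
```

Whether this wins depends on the shapes: the per-$B$ cost becomes a weighted sum of $n(n+1)/2$ fixed matrices, which can beat two fresh multiplications when $A$ is reused many times.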