## Diagonalising an intensity matrix of a reversible Markov chain

Recently I needed to build a Monte Carlo simulator of a continuous-time Markov chain. This is a pretty straightforward exercise; the only catch was that I wanted it to perform well, so I had to use a fast algorithm for matrix exponentiation.

One way to calculate a matrix exponential efficiently is to compute the eigendecomposition of the matrix: $A=Q\Lambda Q^{-1}$, where $\Lambda$ is a diagonal matrix. Then $\exp(A) = Q\exp(\Lambda)Q^{-1}$, where $\exp(\Lambda)$ is just the diagonal matrix of the elementwise exponentials. The process of finding such an eigendecomposition is also known as diagonalisation.
Not every matrix can be diagonalised, though. One particularly nice case is when $A$ is Hermitian: then there exists a decomposition $A=Q\Lambda Q^{-1}$ where $\Lambda$ is real, and $Q$ is unitary. Numerical diagonalisation of a Hermitian matrix is implemented in many software packages.
Even if the matrix in question is not Hermitian, not all is lost: there is a large class of non-Hermitian matrices that can be diagonalised just as easily. That is what this post is about.
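The exponential-via-eigendecomposition identity above can be sketched as follows (the function name is mine; `np.linalg.eig` performs the general eigendecomposition):

```python
import numpy as np

def expm_via_eig(A):
    """exp(A) via the eigendecomposition A = Q diag(lam) Q^{-1}.

    Assumes A is diagonalisable; then exp(A) = Q diag(exp(lam)) Q^{-1}.
    """
    lam, Q = np.linalg.eig(A)
    # Q * exp(lam) scales the columns of Q, i.e. computes Q @ diag(exp(lam)).
    return (Q * np.exp(lam)) @ np.linalg.inv(Q)
```

For a Hermitian $A$ one would use `np.linalg.eigh` instead, which exploits the structure and returns real eigenvalues and a unitary eigenvector matrix.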

If we can find matrix $M$ such that

- $MA$ is Hermitian,
- $M$ can be decomposed as $M=LL^*$, where $L$ is invertible ($L^*$ denotes the conjugate transpose of $L$),

then there exists a decomposition $A=Q\Lambda Q^{-1}$ where $\Lambda$ is real and diagonal. Note: in this post, I discuss the general case where the elements of $A$ are complex numbers. All of it also applies, as a special case, to real-valued matrices; for them, “Hermitian” means “symmetric”, and “unitary” means “orthogonal”. The elements of an intensity matrix of a Markov chain are, of course, real.
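This is where reversibility comes in. For a reversible Markov chain with stationary distribution $\pi$, detailed balance ($\pi_i a_{ij} = \pi_j a_{ji}$) makes $\operatorname{diag}(\pi)\,A$ symmetric, so $M=\operatorname{diag}(\pi)$ satisfies both conditions with $L=\operatorname{diag}(\sqrt{\pi})$. A quick check on a small, made-up birth-death intensity matrix (birth-death chains are reversible by construction):

```python
import numpy as np

# A hypothetical birth-death intensity matrix (rows sum to zero).
A = np.array([[-1.0,  1.0,  0.0],
              [ 2.0, -3.0,  1.0],
              [ 0.0,  1.0, -1.0]])

# Its stationary distribution, found from detailed balance.
pi = np.array([0.5, 0.25, 0.25])

M = np.diag(pi)            # candidate M
L = np.diag(np.sqrt(pi))   # M = L L^*, with L invertible

assert np.allclose(M @ A, (M @ A).T)  # M A is symmetric (Hermitian)
assert np.allclose(L @ L.T, M)        # M factors as L L^*
```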

To find the eigendecomposition of the non-Hermitian matrix $A$, we start with the eigendecomposition of the matrix $V=L^*AL^{-*}$, where $L^{-*}$ abbreviates $\left(L^*\right)^{-1}=\left(L^{-1}\right)^*$. Matrix $V$ is Hermitian: since $MA$ is Hermitian, $LL^*A=A^*LL^*$, i.e. $A^*=LL^*AL^{-*}L^{-1}$, and therefore $V^*=L^{-1}A^*L=L^*AL^{-*}=V$.

Suppose we have found the eigendecomposition

$V=P\Lambda P^{-1}$

(using our favourite matrix manipulation package). Since $V$ is Hermitian, $\Lambda$ is real and $P$ can be chosen unitary, so $P^{-1}=P^*$. Then, by the definition of $V$, we have

$L^*AL^{-*}=P\Lambda P^{-1}$.

Multiplying both sides of this equation by $L^{-*}$ on the left and $L^*$ on the right, we get

$A=L^{-*}P\Lambda P^{-1}L^*$.

Let $Q=L^{-*}P$, then $Q^{-1}=P^{-1}L^*$. Thus,

$A=Q\Lambda Q^{-1}$,

which is the eigendecomposition we were looking for.
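The whole procedure can be sketched in NumPy (the function names are mine; `np.linalg.eigh` handles the Hermitian eigendecomposition of $V$):

```python
import numpy as np

def diagonalise(A, L):
    """Diagonalise A = Q diag(lam) Q^{-1}, with lam real, given an
    invertible L such that (L @ L.conj().T) @ A is Hermitian."""
    Ls = L.conj().T                 # L^*
    Linv_s = np.linalg.inv(Ls)      # L^{-*}
    V = Ls @ A @ Linv_s             # Hermitian by the argument above
    lam, P = np.linalg.eigh(V)      # real eigenvalues, unitary P
    Q = Linv_s @ P                  # Q = L^{-*} P
    Qinv = P.conj().T @ Ls          # Q^{-1} = P^{-1} L^* = P^* L^*
    return lam, Q, Qinv

def expm_t(A, L, t=1.0):
    """exp(tA) = Q diag(exp(t lam)) Q^{-1}; the lam need only be
    exponentiated elementwise, so varying t is cheap."""
    lam, Q, Qinv = diagonalise(A, L)
    return (Q * np.exp(t * lam)) @ Qinv
```

For the Monte Carlo simulator, $A$ is the intensity matrix of a reversible chain and $L=\operatorname{diag}(\sqrt{\pi})$; the rows of $\exp(tA)$ are then the transition probabilities over time $t$, so once $\Lambda$ and $Q$ are computed, evaluating $\exp(tA)$ for many values of $t$ costs only two matrix multiplications each.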
