Houjun Liu

raising e to a matrix

Let’s compute what \(e^{tA}\) should look like, where \(t\) is some scalar and \(A\) is a diagonalizable matrix. This is a supplement to Second-Order Linear Differential Equations.

Let \(v_1\dots v_{m}\) be the eigenvectors of \(A\). Let \(\lambda_{1}\dots\lambda_{m}\) be the eigenvalues.

Recall that we can therefore diagonalize \(A\) as:

\begin{equation} A = \mqty(v_1& \dots& v_{m})\mqty(\dmat{\lambda_{1}, \dots, \lambda_{m}})\mqty(v_1& \dots& v_{m})^{-1} \end{equation}

read: change of coordinates into the eigenbasis, scale by the eigenvalues, then change back to the standard coordinates.

Now, imagine multiplying \(A\) by itself many, many times; what will that look like?

\begin{equation} A^{n} = \mqty(v_1& \dots& v_{m})\mqty(\dmat{\lambda_{1}, \dots, \lambda_{m}})\mqty(v_1& \dots& v_{m})^{-1}\mqty(v_1& \dots& v_{m})\mqty(\dmat{\lambda_{1}, \dots, \lambda_{m}})\mqty(v_1& \dots& v_{m})^{-1} \dots \end{equation}

The middle parts cancel nicely: each is a matrix applied to its own inverse! So we get rid of them:

\begin{equation} A^{n} = \mqty(v_1& \dots& v_{m})\mqty(\dmat{\lambda_{1}, \dots, \lambda_{m}})\mqty(\dmat{\lambda_{1}, \dots, \lambda_{m}}) \dots \mqty(v_1& \dots& v_{m})^{-1} \end{equation}

Now, we are multiplying diagonal matrices by themselves! If you work out the mechanics of matrix multiplication, you will note that each diagonal element simply gets raised to higher powers (the matrices are diagonal!). So then, we have:

\begin{equation} A^{n} = \mqty(v_1& \dots& v_{m})\mqty(\dmat{{\lambda_{1}}^{n}, \dots, {\lambda_{m}}^{n}})\mqty(v_1& \dots& v_{m})^{-1} \end{equation}
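As a minimal numerical sketch of this identity (the matrix here is a hypothetical symmetric, hence diagonalizable, example; it is not from the text above), we can check that repeated multiplication of \(A\) agrees with scaling the eigenvalues to the \(n\)-th power:

```python
import numpy as np

# A hypothetical diagonalizable (symmetric) matrix, chosen for illustration.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Eigendecomposition: columns of V are eigenvectors, lam the eigenvalues.
lam, V = np.linalg.eig(A)

n = 5
# A^n by direct repeated multiplication...
direct = np.linalg.matrix_power(A, n)
# ...and via diagonalization: V diag(lam^n) V^{-1}.
via_diag = V @ np.diag(lam ** n) @ np.linalg.inv(V)

assert np.allclose(direct, via_diag)
```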

Nice.

Recall also the Taylor expansion of \(e^{x}\); we will apply it to \(e^{tA}\):

\begin{equation} e^{tA} = \sum_{k=0}^{\infty} \frac{1}{k!}(tA)^{k} = \sum_{k=0}^{\infty} \frac{t^{k}}{k!}A^{k} \end{equation}

Ok. We now apply the expression for \(A^{k}\) derived above:

\begin{equation} e^{tA} = \sum_{k=0}^{\infty} \frac{t^{k}}{k!}\mqty(v_1& \dots& v_{m})\mqty(\dmat{{\lambda_{1}}^{k}, \dots, {\lambda_{m}}^{k}})\mqty(v_1& \dots& v_{m})^{-1} \end{equation}

See now that \(\mqty(v_1 & \dots &v_{m})\) and its inverse are both constant in the sum, so we can factor them out:

\begin{equation} e^{tA} = \mqty(v_1& \dots& v_{m})\qty(\sum_{k=0}^{\infty}\frac{t^{k}}{k!} \mqty(\dmat{{\lambda_{1}}^{k}, \dots, {\lambda_{m}}^{k}}))\mqty(v_1& \dots& v_{m})^{-1} \end{equation}

And now, adding matrices is just adding them elementwise, so we can push the summation into the matrix:

\begin{equation} e^{tA} = \mqty(v_1& \dots& v_{m})\mqty(\dmat{\sum_{k=0}^{\infty}\frac{t^{k}}{k!} {\lambda_{1}}^{k}, \dots, \sum_{k=0}^{\infty}\frac{t^{k}}{k!} {\lambda_{m}}^{k}})\mqty(v_1& \dots& v_{m})^{-1} \end{equation}

Note now that each value in that matrix is just the Taylor expansion of \(e^{t\lambda_{j}}\) (take a moment to pause if this is not immediately obvious; think about what each element in that diagonal matrix looks like and what the Taylor series of \(e^{x}\) looks like, perhaps evaluated at some arbitrary \(x = ab\)). So:

\begin{equation} e^{tA} = \mqty(v_1& \dots& v_{m})\mqty(\dmat{e^{t\lambda_{1}}, \dots, e^{t\lambda_{m}}})\mqty(v_1& \dots& v_{m})^{-1} \end{equation}
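To close the loop, here is a minimal sketch (again with a hypothetical symmetric matrix, an assumption for illustration) checking that this closed form matches a truncated version of the Taylor series \(\sum_{k} \frac{t^{k}}{k!} A^{k}\) we started from:

```python
import numpy as np

# A hypothetical symmetric (hence diagonalizable) matrix, for illustration.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
t = 0.7

lam, V = np.linalg.eig(A)

# Closed form derived above: V diag(e^{t lambda_j}) V^{-1}.
closed_form = V @ np.diag(np.exp(t * lam)) @ np.linalg.inv(V)

# Reference: partial sums of the Taylor series sum_k t^k/k! A^k.
series = np.zeros_like(A)
term = np.eye(2)                  # (tA)^0 / 0!
for k in range(1, 30):
    series = series + term        # accumulate the (k-1)-th term
    term = term @ (t * A) / k     # next term: (tA)^k / k!

assert np.allclose(closed_form, series)
```

Thirty terms are far more than enough here; the factorial in the denominator makes the series converge very quickly for moderate \(t\).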