An eigenvalue is the scalar needed to scale the basis element of a one-dimensional invariant subspace of a Linear Map, representing the behavior of the map on that subspace:

\begin{equation} Tv = \lambda v \end{equation}

Note we require \(v \neq 0\), because otherwise every scalar would satisfy the equation.

An eigenvector is a vector that forms the length-1 basis list of that one-dimensional invariant subspace under \(T\).

“operators own eigenvalues, eigenvalues own eigenvectors”

Why is the eigenvalue consistent for a given eigenvector? Because a linear map has to act on a subspace's basis the same way it acts on the whole subspace.
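A minimal numeric sketch of \(Tv = \lambda v\) (the matrix \(T\) and the vectors here are hypothetical examples, not from the text):

```python
# Hypothetical 2x2 example: T scales the x-axis by 2 and the y-axis by 3.
def apply(T, v):
    """Apply a 2x2 matrix T (given as a list of rows) to a vector v."""
    return [T[0][0] * v[0] + T[0][1] * v[1],
            T[1][0] * v[0] + T[1][1] * v[1]]

T = [[2, 0],
     [0, 3]]

v = [1, 0]          # an eigenvector of T
Tv = apply(T, v)
print(Tv)           # [2, 0] = 2 * v, so the eigenvalue is 2

# Any nonzero scaling of v is also an eigenvector with the SAME eigenvalue:
print(apply(T, [5, 0]))  # [10, 0] = 2 * [5, 0]
```

Note how the eigenvalue stays 2 for every vector along \(span(v)\), matching the consistency remark above.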

## Motivation

Take some nonzero \(v \in V\) and form the subspace \(U \subset V\):

\begin{equation} U = \{\lambda v\ |\ \lambda \in \mathbb{F}\} = span(v) \end{equation}

Now, if \(U\) is invariant under \(T\), then \(T|_{U}\) is an operator on \(U\), and \(U\) is an invariant subspace of \(T\) of dimension 1 (its basis being the list \(\{v\}\)).

Therefore, for any vector in \(U\) (i.e., any scaling of \(v\)), \(T\) sends it back into \(U\), so we can represent its image with yet another scalar on \(v\), like \(\lambda v\).

In this case, then, we can write that:

\begin{equation} Tv = \lambda v \end{equation}

And then the usual definition of eigenvalues follows.

## constituents

- linear map \(T \in \mathcal{L}(V)\)
- vector \(v \in V\), such that \(v \neq 0\)
- scalar \(\lambda \in \mathbb{F}\)

## requirements

If there exists \(v \in V\) such that \(v\neq 0\) and:

\begin{equation} Tv = \lambda v \end{equation}

then \(\lambda\) is called an eigenvalue of \(T\), and \(v\) an eigenvector corresponding to \(\lambda\).

## additional information

### properties of eigenvalues

Suppose \(V\) is finite-dimensional, \(T \in \mathcal{L}(V)\), and \(\lambda \in \mathbb{F}\); then the following are equivalent:

1. \(\lambda\) is an eigenvalue of \(T\)
2. \(T - \lambda I\) is not injective
3. \(T - \lambda I\) is not surjective
4. \(T - \lambda I\) is not invertible

Showing any one shows all.

Proof:

#### \(1 \implies 2\)

Suppose \(\lambda\) is an eigenvalue of \(T\). Then, we have some \(v \in V\) with \(v \neq 0\) such that:

\begin{equation} Tv = \lambda v \end{equation}

Now:

\begin{align} &Tv = \lambda v \\ \Rightarrow\ & Tv - \lambda v = 0 \\ \Rightarrow\ & Tv - \lambda Iv = 0 \\ \Rightarrow\ & (T-\lambda I)v = 0 \end{align}

the last step by \((T+S)v = Tv+Sv\), the pointwise definition of addition in the vector space \(\mathcal{L}(V)\) (or any \(\mathcal{L}(V,W)\)).

And therefore, \(v \in null\ (T-\lambda I)\), and \(v\neq 0\). And so \(null\ (T-\lambda I) \neq \{0\}\) and so \(T-\lambda I\) is not injective, as desired.

Running the same steps in reverse shows the opposite direction, that \(2 \implies 1\).

#### The others

\(I \in \mathcal{L}(V)\) and \(T \in \mathcal{L}(V)\), and \(\mathcal{L}(V)\) is closed under addition and scalar multiplication, so \((T - \lambda I) \in \mathcal{L}(V)\): it is an operator. Then (2) implies the remaining conditions of non-surjectivity and non-invertibility, because injectivity, surjectivity, and invertibility are equivalent for operators on a finite-dimensional space.
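For a concrete \(2 \times 2\) sketch of the equivalence (a hypothetical example, using the fact that a \(2 \times 2\) matrix is invertible exactly when its determinant is nonzero):

```python
# lambda is an eigenvalue of T  <=>  T - lambda*I is not invertible
# <=> det(T - lambda*I) == 0.  Checked on a hypothetical triangular matrix.
def det2(M):
    """Determinant of a 2x2 matrix given as a list of rows."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def shifted(T, lam):
    """Return T - lam * I for a 2x2 matrix T."""
    return [[T[0][0] - lam, T[0][1]],
            [T[1][0],       T[1][1] - lam]]

T = [[2, 1],
     [0, 3]]   # upper triangular: eigenvalues are the diagonal entries 2, 3

print(det2(shifted(T, 2)))   # 0 -> 2 is an eigenvalue
print(det2(shifted(T, 3)))   # 0 -> 3 is an eigenvalue
print(det2(shifted(T, 5)))   # 6 -> nonzero, so 5 is not an eigenvalue
```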

### lists of eigenvectors for distinct eigenvalues are linearly independent

Let \(T \in \mathcal{L}(V)\), suppose \(\lambda_1, \ldots, \lambda_{m}\) are distinct eigenvalues of \(T\), and \(v_1, \ldots, v_{m}\) are corresponding eigenvectors; then \(v_1, \ldots, v_{m}\) is linearly independent.

proof:

We will show this by contradiction. Suppose \(v_1, \ldots, v_{m}\) is linearly dependent; then, by the Linear Dependence Lemma, there is a smallest index \(j\) such that:

\begin{equation} v_{j} \in span(v_1, \dots, v_{j-1}) \end{equation}

Meaning:

\begin{equation} v_{j} = a_1v_1 + \dots + a_{j-1}v_{j-1} \end{equation}

Given each \(v_{k}\) is an eigenvector, we can apply \(T\) to both sides to get:

\begin{equation} \lambda_{j}v_{j} = a_1\lambda_{1}v_1 + \dots + a_{j-1}\lambda_{j-1}v_{j-1} \end{equation}

We can also get another definition for \(\lambda_{j} v_{j}\) by simply multiplying the definition for \(v_{j}\) above by \(\lambda_{j}\):

\begin{align} &v_{j} = a_1v_1 + \dots + a_{j-1}v_{j-1}\ \text{from above} \\ \Rightarrow\ & \lambda_{j} v_{j} = a_1\lambda_{j}v_1 + \dots + a_{j-1}\lambda_{j}v_{j-1} \end{align}

Now, subtracting our two definitions of \(\lambda_{j} v_{j}\), we get:

\begin{equation} 0 = a_1 (\lambda_{j} - \lambda_{1})v_{1} + \dots +a_{j-1} (\lambda_{j} - \lambda_{j-1})v_{j-1} \end{equation}

Recall now that the eigenvalues \(\lambda_1, \dots, \lambda_{m}\) are distinct, so \(\lambda_{j} - \lambda_{k} \neq 0\) for all \(k \neq j\). Also, no \(v_{k} = 0\); and because \(j\) was chosen as the smallest such index, the sublist \(v_1, \dots, v_{j-1}\) is linearly independent (no vector in it satisfies the Linear Dependence Lemma). This forces \(a_{1} = \dots = a_{j-1} = 0\).

And yet, substituting this back into the expression for \(v_{j}\), we get \(v_{j} = 0\), contradicting that eigenvectors are nonzero. Therefore the list of eigenvectors is linearly independent. \(\blacksquare\)
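A numeric spot-check of the theorem (the matrix and eigenvectors below are a hypothetical example): two vectors in \(\mathbb{R}^2\) are linearly independent exactly when the determinant of the matrix having them as columns is nonzero.

```python
# T = [[2, 1], [0, 3]] has eigenvalues 2 and 3, with eigenvectors
# v1 = (1, 0)  (T v1 = (2, 0) = 2 v1) and
# v2 = (1, 1)  (T v2 = (3, 3) = 3 v2).
v1 = [1, 0]
v2 = [1, 1]

# Determinant of the matrix with columns v1, v2:
det = v1[0] * v2[1] - v2[0] * v1[1]
print(det != 0)   # True: eigenvectors for distinct eigenvalues are independent
```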

#### operators on finite-dimensional V have at most dim V eigenvalues

As a corollary of the above result, suppose \(V\) is finite-dimensional; then each operator on \(V\) has at most \(dim\ V\) distinct eigenvalues, because their eigenvectors form a linearly independent list, and the length of a linearly independent list is at most the length of a spanning list.

#### eigenspaces form a direct sum

the eigenspaces of a Linear Map form a direct sum:

proof:

Corollary of the result above: eigenvectors (i.e., the bases) from distinct eigenspaces are linearly independent, so the only way to write \(0\) as a sum of vectors taken one from each eigenspace is to take the \(0\) vector from each space. That is exactly the condition for the sum of the eigenspaces to be direct. \(\blacksquare\)
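As a small illustration of directness (continuing the hypothetical example \(T = \begin{pmatrix} 2 & 1 \\ 0 & 3 \end{pmatrix}\), whose eigenspaces are \(span((1,0))\) and \(span((1,1))\)): every vector decomposes uniquely as a sum of one vector from each eigenspace.

```python
# (x, y) = (x - y) * (1, 0)  +  y * (1, 1), and these coefficients
# are the only ones that work, so the sum of the eigenspaces is direct.
def decompose(x, y):
    a, b = x - y, y                       # unique coefficients
    return ([a * 1, a * 0], [b * 1, b * 1])

u1, u2 = decompose(5, 2)
print(u1, u2)                             # [3, 0] [2, 2]
print([u1[0] + u2[0], u1[1] + u2[1]])     # [5, 2] -- the sum recovers (5, 2)
```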

### finding eigenvalues with actual numbers

\begin{equation} \lambda \in Spec(T) \iff det(\lambda I-T) = 0 \end{equation}

The polynomial \(det(\lambda I-T)\) is named the “characteristic polynomial.”
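For a \(2 \times 2\) matrix the characteristic polynomial is \(\lambda^2 - tr(T)\lambda + det(T)\), so the eigenvalues fall out of the quadratic formula. A sketch (assuming real eigenvalues, i.e. a nonnegative discriminant):

```python
import math

def eigenvalues_2x2(T):
    """Roots of det(lambda*I - T) = lambda^2 - tr*lambda + det for 2x2 T."""
    tr = T[0][0] + T[1][1]
    det = T[0][0] * T[1][1] - T[0][1] * T[1][0]
    disc = tr * tr - 4 * det        # assumed >= 0 (real eigenvalues)
    root = math.sqrt(disc)
    return ((tr + root) / 2, (tr - root) / 2)

T = [[2, 1],
     [1, 2]]
print(eigenvalues_2x2(T))   # (3.0, 1.0)
```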

### natural coordinates of a map

Given the eigenvectors \((x_+, y_+), (x_-, y_-)\) with eigenvalues \(\lambda_+, \lambda_-\), we can change the coordinates of the matrix into its natural coordinates:

\begin{equation} A = \begin{pmatrix} x_+ & x_- \\ y_+ & y_- \end{pmatrix} \begin{pmatrix} \lambda_+ & 0 \\ 0 & \lambda_- \end{pmatrix} \begin{pmatrix} x_+ & x_- \\ y_+ & y_- \end{pmatrix}^{-1} \end{equation}

This makes raising matrices to powers much, much easier: if you multiply the above factorization by itself \(n\) times, each inner inverse cancels with the following non-inverse, leaving only the diagonal factor raised to the \(n\)th power.
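The cancellation can be checked numerically; a sketch with a hypothetical eigenvector matrix \(P\) and diagonal \(D\):

```python
# A^n = P D^n P^(-1): the inner P^(-1) P pairs collapse.
def matmul(M, N):
    """Multiply two 2x2 matrices given as lists of rows."""
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

P     = [[1, 1], [0, 1]]   # columns are the eigenvectors (1,0), (1,1)
P_inv = [[1, -1], [0, 1]]
D     = [[2, 0], [0, 3]]   # eigenvalues on the diagonal

A = matmul(matmul(P, D), P_inv)

# Compute A^3 two ways: repeated multiplication vs. P D^3 P^(-1).
A3 = matmul(matmul(A, A), A)
D3 = [[2**3, 0], [0, 3**3]]
print(A3 == matmul(matmul(P, D3), P_inv))   # True
```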

### similar matrices

Let \(A,B\) be defined:

\begin{equation} A = C B C^{-1} \end{equation}

and of course:

\begin{equation} B = C^{-1} A C \end{equation}

where \(A, B, C \in \mathcal{L}(V)\) and \(C\) is invertible.

Such \(A\) and \(B\) are called similar, and they have the same eigenvalues.
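A numeric spot-check on a hypothetical \(2 \times 2\) example: similar matrices share trace and determinant, hence the same characteristic polynomial \(\lambda^2 - tr\,\lambda + det\), and therefore the same eigenvalues.

```python
def matmul(M, N):
    """Multiply two 2x2 matrices given as lists of rows."""
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

B     = [[2, 0], [0, 3]]
C     = [[1, 1], [0, 1]]
C_inv = [[1, -1], [0, 1]]
A = matmul(matmul(C, B), C_inv)   # A = C B C^(-1), similar to B

trace = lambda M: M[0][0] + M[1][1]
det   = lambda M: M[0][0] * M[1][1] - M[0][1] * M[1][0]
print(trace(A) == trace(B), det(A) == det(B))   # True True
```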

### invertible matrices

Let \(T \in \mathcal{L}(V)\) be invertible. If \(\lambda\) is an eigenvalue of \(T\), then \(\frac{1}{\lambda}\) is an eigenvalue of \(T^{-1}\). Furthermore, \(T\) and \(T^{-1}\) share eigenvectors, with eigenvalues \(\lambda\) and \(\frac{1}{\lambda}\) respectively: applying \(T^{-1}\) to \(Tv = \lambda v\) gives \(v = \lambda T^{-1}v\), so \(T^{-1}v = \frac{1}{\lambda}v\). (Note \(\lambda \neq 0\), since \(T\) is injective.)
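A check on a hypothetical invertible diagonal example: \(T\) and \(T^{-1}\) share the eigenvector, with reciprocal eigenvalues.

```python
def apply(M, v):
    """Apply a 2x2 matrix M (list of rows) to a vector v."""
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

T     = [[2, 0], [0, 4]]
T_inv = [[0.5, 0], [0, 0.25]]   # inverse of T

v = [1, 0]
print(apply(T, v))       # [2, 0]   -> eigenvalue 2
print(apply(T_inv, v))   # [0.5, 0] -> eigenvalue 1/2, same eigenvector
```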

### symmetric matrices have a real basis of eigenvectors

This falls out of the real spectral theorem: every real symmetric matrix has an orthonormal basis of eigenvectors, with all eigenvalues real.
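A \(2 \times 2\) sketch of why the eigenvalues come out real (hypothetical symmetric example \(T = \begin{pmatrix} a & b \\ b & d \end{pmatrix}\)): the discriminant of the characteristic polynomial rearranges to \((a-d)^2 + 4b^2 \geq 0\), so the quadratic always has real roots.

```python
import math

a, b, d = 2.0, 1.0, 2.0           # symmetric T = [[a, b], [b, d]]
disc = (a - d) ** 2 + 4 * b * b   # rewritten discriminant; never negative
root = math.sqrt(disc)
lams = ((a + d + root) / 2, (a + d - root) / 2)
print(lams)                       # (3.0, 1.0) -- both real

# The corresponding eigenvectors (1, 1) and (1, -1) are orthogonal:
print(1 * 1 + 1 * (-1))           # 0
```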