NUS-MATH530 5.A Problem 35/36
Last edited: August 8, 2025
Warmup: 35
Suppose \(V\) is finite dimensional, \(T \in \mathcal{L}(V)\) and \(U\) is invariant under \(T\). Prove each eigenvalue of \(T / U\) is an eigenvalue of \(T\).
Suppose \(\lambda\) is an eigenvalue of \(T / U\). That is, for some nonzero coset \(v + U \in V / U\):
\begin{equation} Tv + U = \lambda v + U \end{equation}
Meaning:
\begin{equation} (T-\lambda I) v \in U, \text{ for some } v \in V \setminus U \end{equation}
Suppose for the sake of contradiction that \(\lambda\) is not an eigenvalue of \(T\). Then there is no nonzero \(v \in V\) with \(Tv = \lambda v\); in particular, there is no nonzero \(u \in U\) with \(T|_{U}\, u = \lambda u\). Since \(U\) is finite dimensional, \(T|_{U} - \lambda I\) is therefore invertible on \(U\). Now take \(v \in V \setminus U\) with \((T - \lambda I) v = u \in U\); invertibility gives some \(u' \in U\) with \((T - \lambda I) u' = u\), so \((T - \lambda I)(v - u') = 0\). Because \(\lambda\) is not an eigenvalue of \(T\), this forces \(v = u' \in U\), contradicting \(v \notin U\). Hence \(\lambda\) is an eigenvalue of \(T\).
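The claim can be checked numerically on a small sketch. Here is a minimal, hypothetical example (not from the problem itself): a \(2 \times 2\) operator \(T\) for which \(U = \operatorname{span}(e_1)\) is invariant, so that \(T\) is block upper triangular and the quotient operator \(T/U\) is represented by the lower-right block.

```python
import numpy as np

# Hypothetical example: U = span(e1) is invariant under T, so T is
# block upper triangular with respect to the basis (e1, e2).
T = np.array([[2.0, 5.0],
              [0.0, 3.0]])

# With respect to the coset basis {e2 + U} of V/U, the quotient
# operator T/U is represented by the lower-right block of T.
T_quotient = T[1:, 1:]

eigs_T = np.linalg.eigvals(T)
eigs_quotient = np.linalg.eigvals(T_quotient)

# Every eigenvalue of T/U appears among the eigenvalues of T.
print(np.sort(eigs_quotient))  # [3.]
print(np.sort(eigs_T))         # [2. 3.]
```

The containment of the quotient's spectrum in the spectrum of \(T\) is exactly what the proof above establishes.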
NUS-MATH530 5.C Problem 7
Last edited: August 8, 2025
Suppose \(T \in \mathcal{L}(V)\) has a diagonal matrix \(A\) w.r.t. some basis of \(V\), and that \(\lambda \in \mathbb{F}\). Prove that \(\lambda\) appears on the diagonal of \(A\) precisely \(\dim E(\lambda, T)\) times.
Aside: “to appear on the diagonal \(n\) times”
We want to begin by giving a description for what “appearing on the diagonal” of a diagonal matrix implies.
A diagonal matrix is a special case of an upper-triangular matrix, so every value appearing on its diagonal is an eigenvalue of \(T\).
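The statement to be proved can be illustrated on a small hypothetical diagonal matrix: the number of times \(\lambda\) appears on the diagonal matches \(\dim E(\lambda, T) = \dim \operatorname{null}(A - \lambda I)\), computed here via rank–nullity.

```python
import numpy as np

# Hypothetical diagonal matrix: the eigenvalue 2 appears twice.
A = np.diag([2.0, 2.0, 5.0])
lam = 2.0
n = A.shape[0]

# dim E(lam, T) = dim null(A - lam I) = n - rank(A - lam I).
dim_eigenspace = n - np.linalg.matrix_rank(A - lam * np.eye(n))

# Count how often lam appears on the diagonal.
diagonal_count = int(np.sum(np.diag(A) == lam))

print(dim_eigenspace, diagonal_count)  # 2 2
```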
NUS-MATH530 Changing Bases
Last edited: August 8, 2025
Standard Bases Back and Forth
To map the vectors from \(B_2\) back to the standard basis, we simply have to construct the map:
\begin{equation} \mqty(2 & 1 & 2 \\ 1& 1& -1 \\ 1 & -1 & 0) \end{equation}
Applying this matrix to each of the “standard” coordinate vectors of the new basis returns that vector's original representation in the standard basis.
Presumably, then, moving “forward” into the new space is simply a matter of taking the inverse of this matrix, which we will do separately; its inverse is:
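The inverse can be computed numerically as a sanity check. This is a sketch using the change-of-basis matrix written out above, whose columns are the \(B_2\) basis vectors in standard coordinates.

```python
import numpy as np

# The change-of-basis matrix from above: columns are the B2 basis
# vectors expressed in the standard basis.
M = np.array([[2.0,  1.0,  2.0],
              [1.0,  1.0, -1.0],
              [1.0, -1.0,  0.0]])

# M maps B2 coordinates to standard coordinates; the "forward" map
# into B2 coordinates is its inverse.
M_inv = np.linalg.inv(M)

# Round-trip check: coordinates survive the back-and-forth.
x = np.array([1.0, 2.0, 3.0])
print(np.allclose(M_inv @ (M @ x), x))  # True
```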
NUS-MATH530 Geometric Interpretations
Last edited: August 8, 2025
Dot product
Calculations
Let’s calculate some dot products!
\begin{equation} \begin{pmatrix} 1 \\ 0 \end{pmatrix} \cdot \begin{pmatrix} 0 \\ 1 \end{pmatrix} = 0 \end{equation}
\begin{equation} \begin{pmatrix} 1 \\2 \end{pmatrix} \cdot \begin{pmatrix} 2 \\1 \end{pmatrix} = 4 \end{equation}
\begin{equation} \begin{pmatrix} 1 \\ 1 \end{pmatrix} \cdot \begin{pmatrix} -1 \\1 \end{pmatrix} = 0 \end{equation}
\begin{equation} \begin{pmatrix} 1 \\1 \end{pmatrix} \cdot \begin{pmatrix} 2 \\ 2 \end{pmatrix} = 4 \end{equation}
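The four calculations above can be verified in a couple of lines:

```python
import numpy as np

# The four dot products computed above, verified numerically.
pairs = [((1, 0), (0, 1)),
         ((1, 2), (2, 1)),
         ((1, 1), (-1, 1)),
         ((1, 1), (2, 2))]
results = [int(np.dot(a, b)) for a, b in pairs]
print(results)  # [0, 4, 0, 4]
```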
Interpretation
Geometrically, the dot product is the signed length of one vector's projection onto the other, scaled by that other vector's magnitude: \(a \cdot b = \lVert a \rVert \, \lVert b \rVert \cos \theta\). In essence, it multiplies the component of one vector that is parallel to the other by the other vector's length.
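This projection picture can be checked numerically. A small sketch, using the hypothetical vectors \((1, 2)\) and \((2, 1)\) and computing the angle between them independently of the dot product:

```python
import numpy as np

a = np.array([1.0, 2.0])
b = np.array([2.0, 1.0])

# Angle between the vectors, computed independently via arctan2.
theta = np.arctan2(a[1], a[0]) - np.arctan2(b[1], b[0])

# a . b = |a| |b| cos(theta): the projected length |a| cos(theta),
# scaled by |b|.
lhs = np.dot(a, b)
rhs = np.linalg.norm(a) * np.linalg.norm(b) * np.cos(theta)
print(np.isclose(lhs, rhs))  # True
```

Note that this matches the second calculation above: \((1,2) \cdot (2,1) = 4\).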
NUS-MATH530 Geometric Multiplicity
Last edited: August 8, 2025
Let \(\lambda_{m}\) be an eigenvalue of \(T\), an operator on a complex finite-dimensional vector space \(V\). Let \(m\) be the geometric multiplicity of \(\lambda_{m}\). We wish to show that the algebraic multiplicity of \(\lambda_{m}\) is at least \(m\). Let \(\dim V = n\).
We have that \(m\) is the geometric multiplicity of \(\lambda_{m}\), meaning:
\begin{equation} \dim E(\lambda_{m}, T) = m \end{equation}
This means we can take \(m\) linearly independent eigenvectors corresponding to \(\lambda_{m}\) from \(V\). Extend this list to a basis of \(V\): \(v_{1}, \dots, v_{m}, u_{1}, \dots, u_{n-m}\).
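The inequality being proved (geometric multiplicity \(\leq\) algebraic multiplicity) can be seen on a small hypothetical example, a Jordan-style matrix where the two multiplicities differ:

```python
import numpy as np

# Hypothetical operator: lambda = 3 has geometric multiplicity 1
# (one Jordan block of size 2) but algebraic multiplicity 2.
A = np.array([[3.0, 1.0, 0.0],
              [0.0, 3.0, 0.0],
              [0.0, 0.0, 5.0]])
lam = 3.0
n = A.shape[0]

# Geometric multiplicity: dim null(A - lam I) = n - rank(A - lam I).
geo = n - np.linalg.matrix_rank(A - lam * np.eye(n))

# Algebraic multiplicity: how often lam occurs among the eigenvalues.
alg = int(np.sum(np.isclose(np.linalg.eigvals(A), lam)))

print(geo, alg)  # 1 2
```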
