Axler 5.B
Last edited: August 8, 2025
Key Sequence
- we began the chapter by defining \(T^m\) (reminding ourselves of the usual rules \(T^{m+n} = T^{m}T^{n}\), \((T^{m})^{n} = T^{mn}\), and, for invertible maps, \(T^{-m} = (T^{-1})^{m}\)) and \(p(T)\), which plugs \(T\) into a polynomial in place of the variable; from those definitions we showed that polynomials of an operator commute: \(p(T)q(T) = q(T)p(T)\)
- we then used those results + the fundamental theorem of algebra to show that operators on (nonzero, finite-dimensional) complex vector spaces have an eigenvalue
- that previous, important result in hand, we then dove into upper-triangular matrices
- specifically, we learned the properties of upper-triangular matrices: if \(v_1, \dots, v_{n}\) is a basis of \(V\), then \(\mathcal{M}(T)\) is upper-triangular if and only if \(Tv_{j} \in \operatorname{span}(v_1, \dots, v_{j})\) for all \(j \leq n\); equivalently, \(\operatorname{span}(v_1, \dots, v_{j})\) is invariant under \(T\) for each \(j\)
- using that result, we showed that every complex operator has an upper-triangular matrix with respect to some basis
- using some neat tricks of algebra, we then established that an operator is invertible if and only if every diagonal entry of its upper-triangular matrix is nonzero, which seems awfully unmotivated until you learn that…
- the eigenvalues of a map are exactly the diagonal entries of its upper-triangular matrix, which follows almost directly by applying the previous result to the upper-triangular matrix of \(T-\lambda I\) (a quick numerical illustration follows this list)
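Since most of this section is about \(p(T)\) and upper-triangular matrices, here is a minimal numpy sketch (my own illustration, not from the book) of two of the points above: polynomials of an operator commute, and the eigenvalues of an operator sit on the diagonal of its upper-triangular matrix.

```python
import numpy as np

# A random upper-triangular matrix, standing in for M(T) of some operator T on C^4.
rng = np.random.default_rng(0)
A = np.triu(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))

# Polynomials of an operator commute: p(T) q(T) = q(T) p(T).
p_of_A = A @ A + 2 * A + 3 * np.eye(4)   # p(z) = z^2 + 2z + 3
q_of_A = A @ A @ A - np.eye(4)           # q(z) = z^3 - 1
assert np.allclose(p_of_A @ q_of_A, q_of_A @ p_of_A)

# The eigenvalues are exactly the diagonal entries of the upper-triangular matrix.
assert np.allclose(np.sort_complex(np.linalg.eigvals(A)),
                   np.sort_complex(np.diag(A)))
```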
New Definitions
- \(T^m\)
- \(p(T)\)
- technically also product of polynomials
- matrix of an operator
- diagonal of a matrix
- upper-triangular matrix
Results and Their Proofs
- the map \(p \mapsto p(T)\) is linear
- polynomials of an operator commute
- operators on complex vector spaces have an eigenvalue
- properties of upper-triangular matrix
- every complex operator has an upper-triangular matrix
- an operator is invertible if and only if the diagonal of its upper-triangular matrix contains no zeros
- eigenvalues of a map are the entries of the diagonal of its upper-triangular matrix
Questions for Jana
- why define the matrix of an operator again? (just to stress that it's square?)
- for the second flavor of the proof that every complex operator has an upper-triangular matrix, why is \(v_1, \dots, v_{j}\) a basis of \(V\)?
Interesting Factoids
It's 12:18 AM and I've been reading this chapter for 5 hours. I also just got jumpscared by my phone notification. What's happening?
Axler 5.C
Last edited: August 8, 2025
Key Sequence
- we defined an eigenspace, which is the space of all eigenvectors corresponding to a given eigenvalue (together with \(0\)), and showed that
- eigenspaces corresponding to distinct eigenvalues form a direct sum
- and, since direct sums behave a bit like disjoint sets, we get the perhaps expected result about dimension: the sum of the eigenspaces' dimensions must be less than or equal to \(\dim V\)
- we defined a Diagonal Matrix, which by its structure + a short calculation can be shown to require a basis of eigenvectors
- and from there, and the properties of eigenspaces above, we deduced some conditions equivalent to diagonalizability
- a direct corollary of the last point (perhaps more straightforwardly intuited by just lining the eigenvalues up along the diagonal of a matrix) is that enough eigenvalues implies diagonalizability: an operator with \(\dim V\) distinct eigenvalues is diagonalizable (see the sketch after this list)
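A minimal numpy sketch of that last point, assuming a concrete \(3 \times 3\) matrix of my own choosing (not an example from the book): three distinct eigenvalues force a basis of eigenvectors, and with respect to that basis the matrix of the operator is diagonal.

```python
import numpy as np

# An operator on R^3 with three distinct eigenvalues (2, 3, 5 on the diagonal).
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 5.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)   # columns of `eigenvectors` are eigenvectors
P = eigenvectors                               # change of basis into the eigenvector basis
D = np.linalg.inv(P) @ A @ P                   # matrix of the operator in that basis

# D is (numerically) diagonal, with the eigenvalues along its diagonal.
assert np.allclose(D, np.diag(eigenvalues))
```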
New Definitions
Results and Their Proofs
- eigenspaces are a direct sum
- dimension of sum of eigenspaces is smaller than or equal to the dimension of the whole space
- conditions equivalent to diagonalizability
- enough eigenvalues implies diagonalizability
Questions for Jana
- for diagonalizability, shouldn’t \(n\) be \(m\) on item 3?
Interesting Factoids
Short!
Axler 6.A
Last edited: August 8, 2025
Hear ye, hear ye! Length and angles are a thing now!!
Key Sequence
- We remembered how dot products work, then generalized them into inner products; this is needed because complex numbers don't behave well when squared (we want \(\langle v,v \rangle\) to be real and nonnegative), so the definition adds conjugation as a guardrail
- We then learned that the dot product is just an instantiation of the Euclidean Inner Product, which itself is simply one of many inner products. A vector space that has a well-defined inner product is now called an Inner Product Space
- Along with revisiting our definition of dot products to include complex numbers, we changed our definition of norm to be in terms of inner products, \(\|v\| = \sqrt{\langle v,v \rangle}\), to help support complex vector spaces better; we then also redefined orthogonality and showed a few results regarding it
- Then, we did a bunch of analysis-y work to understand some properties of norms and inner products: the Pythagorean Theorem, Cauchy-Schwarz Inequality, triangle inequality, and parallelogram equality (a quick numerical check of these follows this list).
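Here is a small numpy check of those analysis-y results (my own sketch; the `inner` helper below just implements the Euclidean inner product on \(\mathbb{C}^n\) with Axler's convention of linearity in the first slot).

```python
import numpy as np

def inner(u, v):
    # Euclidean inner product on C^n: <u, v> = u_1 conj(v_1) + ... + u_n conj(v_n)
    return np.sum(u * np.conj(v))

def norm(v):
    return np.sqrt(inner(v, v).real)

rng = np.random.default_rng(1)
u = rng.normal(size=4) + 1j * rng.normal(size=4)
v = rng.normal(size=4) + 1j * rng.normal(size=4)

# Cauchy-Schwarz: |<u, v>| <= ||u|| ||v||
assert abs(inner(u, v)) <= norm(u) * norm(v)

# Triangle inequality: ||u + v|| <= ||u|| + ||v||
assert norm(u + v) <= norm(u) + norm(v)

# Parallelogram equality: ||u + v||^2 + ||u - v||^2 = 2(||u||^2 + ||v||^2)
assert np.isclose(norm(u + v) ** 2 + norm(u - v) ** 2,
                  2 * (norm(u) ** 2 + norm(v) ** 2))
```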
New Definitions
- dot product and the more generally important…
- inner product!, Euclidean Inner Product, Inner Product Space
- norm
- orthogonal
- and now, a cornucopia of analysis
Results and Their Proofs
- properties of the dot product
- properties of inner product
- properties of the norm
- orthogonality and \(0\)
Questions for Jana
- How much of the analysis-y proof work do we have to remember for the analysis-y results?
Interesting Factoids
Axler 6.B
Last edited: August 8, 2025
OMG it's Gram-Schmidtting
Key Sequence
- we defined lists of vectors that all have norm 1 and are all orthogonal to each other as orthonormal; we showed that an orthonormal list is linearly independent by hijacking the Pythagorean Theorem
- of course, once we have a linearly independent list of length \(\dim V\) we must have a basis. The nice thing about such an orthonormal basis is that for every vector we know precisely what its coefficients have to be! Specifically, \(a_{j} = \langle v, e_{j} \rangle\). That's cool.
- What we really want, though, is to be able to get an orthonormal basis from a regular basis, which we can do via Gram-Schmidt (see the sketch after this list). In fact, this gives us some useful corollaries regarding the existence of orthonormal bases (just Gram-Schmidt a normal one), or extending an orthonormal list to a basis, etc. There are also important implications (still along the lines of "just Gram-Schmidt it!") for upper-triangular matrices as well
- We also learned, as a result of orthonormal bases, that any linear functional (a Linear Map to the scalars) on a finite-dimensional inner product space can be represented as an inner product with a fixed vector, via the Riesz Representation Theorem, which is honestly kinda epic.
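And here is a minimal sketch of Gram-Schmidt itself (my own implementation, reusing the same `inner` helper as above), together with a check that writing a vector against the resulting orthonormal basis really does use the coefficients \(\langle v, e_{j} \rangle\).

```python
import numpy as np

def inner(u, v):
    # Euclidean inner product on C^n, linear in the first slot (Axler's convention).
    return np.sum(u * np.conj(v))

def gram_schmidt(vectors):
    """Turn a linearly independent list into an orthonormal list with the same spans."""
    es = []
    for v in vectors:
        # Subtract the components along the orthonormal vectors built so far, then normalize.
        w = v - sum(inner(v, e) * e for e in es)
        es.append(w / np.sqrt(inner(w, w).real))
    return es

rng = np.random.default_rng(2)
vs = [rng.normal(size=3) + 1j * rng.normal(size=3) for _ in range(3)]
es = gram_schmidt(vs)

# Orthonormal: <e_j, e_k> is 1 when j == k and 0 otherwise.
gram = np.array([[inner(ej, ek) for ek in es] for ej in es])
assert np.allclose(gram, np.eye(3))

# Writing a vector as a linear combination of an orthonormal basis: v = sum_j <v, e_j> e_j.
v = rng.normal(size=3) + 1j * rng.normal(size=3)
assert np.allclose(v, sum(inner(v, e) * e for e in es))
```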
New Definitions
- orthonormal + orthonormal basis
- Gram-Schmidt (i.e. orthonormalization)
- linear functional and Riesz Representation Theorem
Results and Their Proofs
- Norm of an Orthogonal Linear Combination
- An orthonormal list is linearly independent
- An orthonormal list of the right length is a basis
- Writing a vector as a linear combination of orthonormal basis
- Corollaries of Gram-Schmidt
- Riesz Representation Theorem
Questions for Jana
Interesting Factoids
Axler 7.A
Last edited: August 8, 2025
This is not actually a proper review of the chapter; instead, it is an opinionated review of what I think Jana thinks Axler thinks is important about 7.A.
Note that all of the "proofy things" in this section have gone poof (i.e. are omitted) because of trips scheduled before the end of the year.
Here’s an outline:
- We defined the adjoint
- We learned some properties of the adjoint; importantly, that \((A+B)^{*} = A^{*} + B^{*}\), \((AB)^{*} = B^{*} A^{*}\), \((\lambda T)^{*} = \bar{\lambda}T^{*}\); a corollary is that \(M^{*}M\) is self-adjoint
- We defined normal, self-adjoint, and unitary
- With those definitions, we showed that the eigenvalues of self-adjoint matrices are real
- Then, we created two mildly interesting intermediate results
- Over \(\mathbb{C}\), \(\langle Tv, v \rangle = 0\) for all \(v\) if and only if \(T\) is the zero map
- Over \(\mathbb{R}\), if \(\langle Tv, v \rangle = 0\) for all \(v\) and \(T\) is self-adjoint, then \(T\) is the zero map
- The latter of which (via the fact that \(T\) is normal if and only if \(\|Tv\| = \|T^{*}v\|\) for all \(v\)) shows that eigenvectors of \(T\) corresponding to distinct eigenvalues are orthogonal if \(T \in \mathcal{L}(V)\) is normal
- All of this builds up to the Complex Spectral Theorem, which you should know (a small numerical sketch follows this outline)
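A small numerical sketch of some of the points above (my own, using matrices so that the adjoint is just the conjugate transpose, with a self-adjoint matrix standing in for the normal case):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
B = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

# With respect to orthonormal bases, the adjoint is the conjugate transpose.
adj = lambda M: M.conj().T

# (AB)* = B* A*, and M* M is self-adjoint.
assert np.allclose(adj(A @ B), adj(B) @ adj(A))
assert np.allclose(adj(adj(A) @ A), adj(A) @ A)

# A self-adjoint (hence normal) matrix: real eigenvalues, orthonormal eigenvectors.
H = A + adj(A)
assert np.allclose(np.linalg.eigvals(H).imag, 0)       # eigenvalues are real
_, eigenvectors = np.linalg.eigh(H)
assert np.allclose(adj(eigenvectors) @ eigenvectors, np.eye(3))  # eigenvectors are orthonormal
```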
adjoint
Suppose \(T \in \mathcal{L}(V,W)\); we define the adjoint of \(T\) as the map \(T^{*} \in \mathcal{L}(W,V)\) that satisfies \(\langle Tv, w \rangle = \langle v, T^{*}w \rangle\) for every \(v \in V\) and \(w \in W\).
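To make the definition concrete, here is a quick worked example of my own (not necessarily the one the book uses): take \(T \in \mathcal{L}(\mathbb{R}^{3}, \mathbb{R}^{2})\) defined by \(T(x_1, x_2, x_3) = (x_2 + 3x_3,\ 2x_1)\). For any \(w = (y_1, y_2)\),
\[
\langle T(x_1, x_2, x_3), (y_1, y_2) \rangle = (x_2 + 3x_3)y_1 + 2x_1 y_2 = x_1(2y_2) + x_2(y_1) + x_3(3y_1) = \langle (x_1, x_2, x_3), (2y_2,\ y_1,\ 3y_1) \rangle,
\]
so \(T^{*}(y_1, y_2) = (2y_2,\ y_1,\ 3y_1)\).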