Axler 3.C
Last edited: August 8, 2025
Matrices!!!!
Key Sequence
- matrices exist; you can add them, multiply them by scalars, and actually multiply them together
- they can represent Linear Maps by recording where each basis vector is sent
- unsurprisingly, the set of matrices of a given shape is a vector space
New Definitions
Results and Their Proofs
- sums and scalar multiples of matrices, and why they correctly represent sums and scalar multiples of Linear Maps
- \(\mathbb{F}^{m,n}\) is a vector space
Interesting Factoids
it’s literally matrices
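A minimal sketch of the three operations from the key sequence in pure Python (the helper names are mine, not Axler's), with a matrix stored as a list of rows:

```python
# Matrices as lists of rows; illustrative helpers, not a library API.

def mat_add(A, B):
    # entrywise sum: (A + B)[j][k] = A[j][k] + B[j][k]
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_scale(c, A):
    # scalar multiple: (cA)[j][k] = c * A[j][k]
    return [[c * a for a in row] for row in A]

def mat_mul(A, B):
    # (AB)[j][k] = sum_r A[j][r] * B[r][k]; defined when the number of
    # columns of A equals the number of rows of B
    return [[sum(A[j][r] * B[r][k] for r in range(len(B)))
             for k in range(len(B[0]))] for j in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
print(mat_add(A, B))    # [[1, 3], [4, 4]]
print(mat_scale(2, A))  # [[2, 4], [6, 8]]
print(mat_mul(A, B))    # [[2, 1], [4, 3]]
```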
Axler 3.D
Last edited: August 8, 2025
Isomorphisms. Somebody’s new favourite word since last year.
Key Sequence
- we showed that a linear map’s inverse is unique, and so named the inverse \(T^{-1}\)
- we then showed an important result: injectivity and surjectivity together imply invertibility
- this property allowed us to use invertible maps to define isomorphic spaces, naming the invertible map between them the isomorphism
- we see that having the same dimension is enough to show two spaces are isomorphic (it’s an IFF), because we can send a basis of one space to a basis of the other
- we then use that property to establish that matrices and linear maps have an isomorphism between them: namely, the matrixify operator \(\mathcal{M}\).
- this isomorphism allows us to show that the dimension of a space of Linear Maps is the product of the dimensions of their domain and codomain (that \(\dim \mathcal{L}(V,W) = (\dim V)(\dim W)\))
- We then, for some unknown reason, decided that right this second we gotta define the matrix of a vector, and that applying a linear map works like matrix multiplication because of it. Not sure how this relates
- finally, we defined a Linear Map from a space to itself as an operator
- we finally show an important result: despite not being true for infinite-dimensional vector spaces, injectivity is equivalent to surjectivity for operators on finite-dimensional spaces
New Definitions
Results and Their Proofs
- linear map inverse is unique
- injectivity and surjectivity together imply invertibility
- two vector spaces are isomorphic IFF they have the same dimension
- matrices and Linear Maps of the right dimensions are isomorphic
- \(\dim \mathcal{L}(V,W) = (\dim V)(\dim W)\)
- \(\mathcal{M}(T)_{\cdot,k} = \mathcal{M}(Tv_{k})\), a result of how everything is defined (see matrix of a vector)
- applying a linear map works like matrix multiplication
- injectivity is equivalent to surjectivity for finite-dimensional operators
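A quick numeric sanity check of the last two results (pure Python; the operator \(T\) and the helper names are made up for illustration): the \(k\)-th column of \(\mathcal{M}(T)\) is \(\mathcal{M}(Tv_{k})\), so applying \(T\) agrees with matrix-vector multiplication.

```python
# Example operator on F^2: T(x, y) = (x + 2y, 3x).
def T(v):
    x, y = v
    return (x + 2 * y, 3 * x)

# standard basis of the domain
basis = [(1, 0), (0, 1)]

# build M(T) row by row: column k of M(T) is T applied to the k-th basis vector
M = [[T(e)[j] for e in basis] for j in range(2)]
print(M)  # [[1, 2], [3, 0]]

def mat_vec(M, v):
    # (M(T) M(v))_j = sum_k M[j][k] * v[k]
    return tuple(sum(M[j][k] * v[k] for k in range(len(v)))
                 for j in range(len(M)))

v = (5, 7)
# applying the map and multiplying by its matrix give the same vector
assert T(v) == mat_vec(M, v) == (19, 15)
```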
Questions for Jana
- why doesn’t Axler just say the “basis of domain” directly (i.e. he did a linear combination instead) for the second direction of the “two vector spaces are isomorphic IFF they have the same dimension” proof?
  - because it makes the next steps for spanning (surjectivity) and linear independence (injectivity) more obvious
- clarify the “matrices and Linear Maps of the right dimensions are isomorphic” proof
- what is the “multiplication by \(x^{2}\)” operator?
  - literally multiplying by \(x^{2}\)
- how does the matrix-of-a-vector detour relate to the content before and after? I suppose an isomorphism exists, but it isn’t explicitly used in the “linear maps are like matrix multiplication” proof, which is the whole point
  - because we needed to close the loop of being able to do linear algebra entirely with matrices, which we couldn’t do without the isomorphism between matrices and maps
Interesting Factoids
Axler 3.E
Last edited: August 8, 2025
No idea why this is so long!!!
Key Sequence
Firehose of a chapter.
- We first began an unrelated exploration in Product of Vector Spaces (“tuples”):
- we show that the Product of Vector Spaces is a vector space
- because you can build a basis of the product by taking each basis vector of each factor and zeroing out every other slot of the tuple, we learned that the dimension of the Product of Vector Spaces is the sum of the factors’ dimensions.
- we defined the product-to-sum map \(\Gamma\)
- We then tackled the fun part of this chapter, which is affine subsets, parallel structures, quotient spaces, quotient map (affine subsetification maps)
- we learned an important and useful result that two affine subsets parallel to \(U\) are either equal or disjoint (\(v-w \in U\) implies \(v+U = w+U\), which implies \((v+U) \cap (w+U) \neq \emptyset\), which implies the first statement again, closing the loop)
- we defined the operations on the quotient space, and showed that they behave uniformly on equivalent affine subsets (i.e. are well-defined). This, and the usual closure-checking proof, demonstrates that the quotient space is a vector space
- with the help of the affine subsetification map (the quotient map \(\pi\)), we show that the dimension of a quotient space is the difference of the dimensions of its constituents (\(\dim V/U = \dim V - \dim U\)), essentially by invoking the rank-nullity theorem after knowing that \(null\ \pi = U\) (because \(u+U\) is an affine subset that has not been shifted; think about a line moving along itself… it doesn’t move)
- Then, and I’m not quite sure why, we defined \(\widetilde{T}: V / null\ T \to W\), for some \(T: V\to W\), defined as \(\widetilde{T}(v+null\ T) = Tv\).
- We show that the map is linear and injective, and that its range is \(range\ T\), so it forms an isomorphism between \(V / null\ T\) and \(range\ T\).
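The “equal or disjoint” fact can be sanity-checked concretely (a pure-Python sketch with hypothetical helper names) in \(\mathbb{R}^{2}\) with \(U = span\{(1,1)\}\): two translates \(v+U\) and \(w+U\) coincide exactly when \(v-w \in U\).

```python
# U = span{(1, 1)} in R^2, i.e. the line {(t, t)}.

def in_U(v):
    # membership test for U: both coordinates equal
    return v[0] == v[1]

def same_affine_subset(v, w):
    # v + U equals w + U  iff  v - w lies in U
    return in_U((v[0] - w[0], v[1] - w[1]))

print(same_affine_subset((2, 0), (5, 3)))  # True: (2,0)-(5,3) = (-3,-3) is in U
print(same_affine_subset((2, 0), (2, 1)))  # False: (0,-1) is not in U
```

Here \(\dim V/U = 2 - 1 = 1\): the translates of the line \(U\) are parametrized by a single number, matching the dimension formula above.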
Here’s something: products and quotients, the intuition
Axler 3.F
Last edited: August 8, 2025
Because duality is fun and I’m bored and houjun-being-obtuse.
Key Sequence
New Definitions
Results and Their Proofs
Questions for Jana
Interesting Factoids
Hello from onboard NH107! Or perhaps my next connecting flight, or from China.
Axler 5.A
Last edited: August 8, 2025
EIGENSTUFF and OPERATORS! Invariant subspaces are nice.
Sometimes, if we can break the domain of a linear map down into invariant subspaces (such as eigenspaces), we can understand what it’s doing on a component-wise level.
Key Sequence
- we defined an invariant subspace, and gave a name to 1-D invariant subspaces: the span of eigenvectors
- we showed some properties of eigenvalues and showed that a list of eigenvectors corresponding to distinct eigenvalues is linearly independent
- a corollary of this is that an operator on finite-dimensional \(V\) has at most \(\dim V\) distinct eigenvalues
- finally, we defined map restriction operator and quotient operator, and showed that they were well-defined
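A concrete sketch of the eigen-results (pure Python; the example operator and helpers are mine): an eigenvalue \(\lambda\) is exactly where \(T - \lambda I\) fails to be injective (zero determinant in the 2-D case), and eigenvectors for distinct eigenvalues come out linearly independent.

```python
# Example operator on R^2: T(x, y) = (2x, x + 3y),
# whose matrix in the standard basis is [[2, 0], [1, 3]].

def det2(a, b, c, d):
    # determinant of the 2x2 matrix [[a, b], [c, d]]
    return a * d - b * c

# scan integer candidates: det([[2 - t, 0], [1, 3 - t]]) = (2 - t)(3 - t)
eigenvalues = [t for t in range(-10, 11) if det2(2 - t, 0, 1, 3 - t) == 0]
print(eigenvalues)  # [2, 3]

def T(v):
    x, y = v
    return (2 * x, x + 3 * y)

# eigenvector checks: T v = lambda v
assert T((1, -1)) == (2, -2)  # eigenvalue 2
assert T((0, 1)) == (0, 3)    # eigenvalue 3

# the two eigenvectors are linearly independent:
# the matrix with them as columns has nonzero determinant
assert det2(1, 0, -1, 1) != 0
```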
New Definitions
- invariant subspace
- conditions for nontrivial invariant subspace
- eigenvalues + eigenvectors + eigenspace
- two new operators: map restriction operator and quotient operator
Results and Their Proofs
- properties of eigenvalues
- a list of eigenvectors with distinct eigenvalues is linearly independent
- quotient operator is well-defined
Questions for Jana
Interesting Factoids
“eigenvalue” is sometimes called the “characteristic value” of a map