Axler 2.A
Last edited: August 8, 2025
Key Sequence
- we defined a linear combination of a list of vectors, and defined the set of all linear combinations of a list to be called its span
- we defined the idea of a finite-dimensional vector space vis-à-vis spanning
- we took a god-forsaken digression into polynomials that will surely not come back and bite us in chapter 4
- we defined linear independence + linear dependence and, from those definitions, proved the actual use case of these concepts, which is the Linear Dependence Lemma
- we applied the Linear Dependence Lemma to show that the length of a linearly independent list \(\leq\) the length of a spanning list, as well as that subspaces of finite-dimensional vector spaces are finite-dimensional. Both of these proofs work by making linearly independent lists: the former by taking a spanning list and making it smaller and smaller, and the latter by taking a linearly independent list and making it bigger and bigger
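Not from Axler, but a quick numpy sanity check of the Linear Dependence Lemma on a hand-picked dependent list in \(\mathbb{R}^3\) (the vectors and the `rank` helper are my own, using matrix rank as a stand-in for "dimension of the span"):

```python
import numpy as np

# A linearly dependent list in R^3: v3 = v1 + 2*v2 by construction.
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
v3 = v1 + 2 * v2
vs = [v1, v2, v3]

def rank(vectors):
    """Dimension of the span, via matrix rank of the stacked columns."""
    return np.linalg.matrix_rank(np.column_stack(vectors))

# (a) some v_j lies in span(v_1, ..., v_{j-1}):
# appending v3 to [v1, v2] does not raise the rank.
assert rank([v1, v2]) == rank([v1, v2, v3]) == 2

# (b) removing that v_j leaves the span unchanged.
assert rank(vs) == rank([v1, v2])

# And a linearly independent list is never longer than a spanning list:
# (e1, e2, e3) spans R^3, so any independent list there has length <= 3.
assert rank(vs) <= 3
```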
New Definitions
- linear combination
- span + “spans”
- finite-dimensional vector space
- polynomial
- linear independence and linear dependence
- Linear Dependence Lemma
Results and Their Proofs
- span is the smallest subspace containing all vectors in the list
- \(\mathcal{P}(\mathbb{F})\) is a vector space over \(\mathbb{F}\)
- the world-famous Linear Dependence Lemma and its fun \(v_1 = 0\) edge case
- length of linearly-independent list \(\leq\) length of spanning list
- subspaces of finite-dimensional vector spaces are finite-dimensional
Questions for Jana
- obviously polynomials are non-linear structures; under what conditions are they nice to work with in linear algebra?
- what is the “obvious way” to change the Linear Dependence Lemma’s part \(b\) to make \(v_1=0\) work?
- for the finite-dimensional subspaces proof, though we know that the process terminates, how do we know that it terminates at a spanning list of \(U\) and not just a linearly independent list in \(U\)?
- direct sums and linear independence seem related; how exactly?
Interesting Factoids
I just ate an entire Chinese New Year’s worth of food while typing this up. That’s worth something, right?
Axler 2.B
Last edited: August 8, 2025
Key Sequence
- we defined a basis of a vector space (a linearly independent spanning list of that vector space) and showed that a list is a basis exactly when every vector in the space can be written uniquely as a linear combination of the list
- we show that you can chop a spanning list of a space down to a basis or build a linearly independent list up to a basis
- because of this, you can take a spanning list of a finite-dimensional vector space and chop it down to a basis: so every finite-dimensional vector space has a basis
- lastly, we can use the fact that you can grow a linearly independent list into a basis to show that every subspace of \(V\) is part of a direct sum equaling \(V\)
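Again not from the book: a sketch of the “chop a spanning list down to a basis” procedure in numpy, where `reduce_to_basis` is my own name for the greedy keep-it-if-it’s-new pass (independence is checked via matrix rank):

```python
import numpy as np

def reduce_to_basis(vectors):
    """Keep each vector only if it is not already in the span of the
    vectors kept so far, i.e. chop a spanning list down to a basis."""
    basis = []
    for v in vectors:
        candidate = basis + [v]
        # The kept list stays linearly independent iff rank == length.
        if np.linalg.matrix_rank(np.column_stack(candidate)) == len(candidate):
            basis = candidate
    return basis

# A redundant spanning list of R^2.
spanning = [np.array([1.0, 0.0]),
            np.array([2.0, 0.0]),   # dependent on the first, gets chopped
            np.array([0.0, 1.0])]
basis = reduce_to_basis(spanning)
assert len(basis) == 2
assert np.linalg.matrix_rank(np.column_stack(basis)) == 2  # still spans R^2
```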
New Definitions
Axler 2.C
Last edited: August 8, 2025
Key Sequence
- Because Length of Basis Doesn’t Depend on Basis, we defined dimension as the shared length of every basis of a vector space
- We showed that lists of the right length (i.e. the dimension of the space) that are either spanning or linearly independent must be a basis: the “half is good enough” theorems
- we also showed that \(\dim(U_1+U_2) = \dim(U_1)+\dim(U_2) - \dim(U_1 \cap U_2)\): dimension of sums
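A quick numerical check of the dimension-of-sums formula (my own example: two coordinate planes in \(\mathbb{R}^3\), chosen so the intersection is known by construction):

```python
import numpy as np

def dim_span(vectors):
    """Dimension of the span of a list of vectors, via matrix rank."""
    return np.linalg.matrix_rank(np.column_stack(vectors))

e1, e2, e3 = np.eye(3)
U1 = [e1, e2]    # the x-y plane
U2 = [e2, e3]    # the y-z plane; U1 ∩ U2 = span(e2) by construction

dim_sum = dim_span(U1 + U2)   # concatenated lists generate U1 + U2
dim_int = dim_span([e2])      # the intersection we built in

# dim(U1 + U2) = dim U1 + dim U2 - dim(U1 ∩ U2), here 3 = 2 + 2 - 1.
assert dim_sum == dim_span(U1) + dim_span(U2) - dim_int
```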
New Definitions
Results and Their Proofs
- Length of Basis Doesn’t Depend on Basis
- lists of the right length are bases
- dimension of sums
Questions for Jana
- Example 2.41: why is it that \(\dim U \neq 4\)? We only know that \(\dim \mathcal{P}_{3}(\mathbb{R}) = 4\) and \(\dim U \leq 4\). Is it because \(U\) is strictly a subset of \(\mathcal{P}_{3}(\mathbb{R})\) (i.e. a basis of \(U\) doesn’t span the polynomials), so there must be some extension needed? (maybe: because we know that \(U\) isn’t all of \(\mathcal{P}_{3}(\mathbb{R})\))
Interesting Factoids
Axler 3.A
Last edited: August 8, 2025
OMGOMGOMG it’s Linear Maps time! “One of the key definitions in linear algebra.”
Key Sequence
- We define these new-fangled functions called Linear Maps, which obey \(T(u+v) = Tu+Tv\) and \(T(\lambda v) = \lambda Tv\)
- We denote the set of all linear maps between two vector spaces \(V,W\) by \(\mathcal{L}(V,W)\); and, in fact, by defining addition and scalar multiplication of Linear Maps in the way you’d expect, \(\mathcal{L}(V,W)\) is a vector space!
- this also means that we can use effectively the \(0v=0\) proof to show that linear maps take \(0\) to \(0\)
- we show that Linear Maps are defined uniquely by where they take the basis of a vector space; in fact, there exists a Linear Map to take the basis anywhere you want to go!
- though this doesn’t usually make sense, we call the “composition” operation on Linear Maps their “product” and show that this product is associative, distributive, and has an identity
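The last two bullets can be poked at with numpy, where matrices play the linear maps (my own toy matrices; the columns of a matrix are exactly “where the map takes the standard basis,” and the “product” of maps is matrix multiplication, i.e. composition):

```python
import numpy as np

# A linear map on R^2 is pinned down by the images of the basis:
# the columns below say T e1 = (1,2) and T e2 = (3,4).
T = np.column_stack([[1.0, 2.0], [3.0, 4.0]])
S = np.column_stack([[0.0, 1.0], [1.0, 0.0]])   # S swaps e1 and e2
T2 = np.ones((2, 2))
v = np.array([5.0, 7.0])

# Product of linear maps = composition: (S T)v = S(T v).
assert np.allclose((S @ T) @ v, S @ (T @ v))    # associative
assert np.allclose(S @ (T + T2), S @ T + S @ T2)  # distributive
I = np.eye(2)                                    # the identity map
assert np.allclose(I @ T, T) and np.allclose(T @ I, T)
```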
New Definitions
- Linear Map — additivity (adding “distributes”) and homogeneity (scalar multiplication “factors”)
- \(\mathcal{L}(V,W)\)
- any map from \(\mathbb{F}^n\) to \(\mathbb{F}^m\) whose coordinates are degree-one homogeneous polynomials in the inputs is a linear map
- addition and scalar multiplication on \(\mathcal{L}(V,W)\); and, as a bonus, \(\mathcal{L}(V,W)\) a vector space!
- naturally (almost by the same \(0v=0\) proof), linear maps take \(0\) to \(0\)
- Product of Linear Maps is just composition. This product is:
- associative
- distributive
- has an identity
Results and Their Proofs
- technically a result: any map from \(\mathbb{F}^n\) to \(\mathbb{F}^m\) whose coordinates are degree-one homogeneous polynomials in the inputs is a linear map
- basis of domain of linear maps uniquely determines them
Questions for Jana
- why does the second part of the basis of domain proof make it unique?
Axler 3.B
Last edited: August 8, 2025
Key Sequence
- we defined the null space and injectivity
- from that, we showed that injectivity holds IFF the null space is \(\{0\}\), essentially because if \(T0=0\) already, there cannot be another vector that is also taken to \(0\) by an injective function
- we defined range and surjectivity
- we showed that these concepts are strongly related by the fundamental theorem of linear maps: if \(T \in \mathcal{L}(V,W)\), then \(\dim V = \dim \operatorname{null} T + \dim \operatorname{range} T\)
- from the fundamental theorem, we showed the somewhat intuitive pair about the sizes of maps: map to smaller space is not injective, map to bigger space is not surjective
- we then applied that result to show results about homogeneous systems
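The fundamental theorem can be checked on a concrete matrix (my own example: a map \(\mathbb{R}^4 \to \mathbb{R}^3\) with two null-space vectors found by hand, so the count \(\dim V = \dim \operatorname{null} T + \dim \operatorname{range} T\) is verified rather than assumed):

```python
import numpy as np

# T : R^4 -> R^3 as a matrix (columns = images of the standard basis).
T = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 0.0]])

# Two independent vectors that T kills, found by hand.
n1 = np.array([1.0, 1.0, -1.0, 0.0])
n2 = np.array([0.0, 0.0, 0.0, 1.0])
assert np.allclose(T @ n1, 0) and np.allclose(T @ n2, 0)
assert np.linalg.matrix_rank(np.column_stack([n1, n2])) == 2  # dim null T >= 2

dim_range = np.linalg.matrix_rank(T)   # dim range T = 2
# Fundamental theorem: dim V = dim null T + dim range T, here 4 = 2 + 2.
assert T.shape[1] == 2 + dim_range

# Sanity check on "sizes": this map lands in a smaller space (effectively),
# and indeed it is not injective since its null space is nontrivial.
```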
New Definitions
Results and Their Proofs
- the null space is a subspace of the domain
- injectivity holds IFF the null space is \(\{0\}\)
- the fundamental theorem of linear maps
- “sizes” of maps
- solving systems of equations:
Questions for Jana
- “To prove the inclusion in the other direction, suppose \(v \in \operatorname{null} T\)” for 3.16; what is the first direction? (maybe: nothing maps to \(0\))