Houjun Liu

direct sum

A direct sum is a sum of subspaces (not just subsets!!) where there’s only one way to represent each element.

constituents

subspaces of \(V\) named \(U_1, \dots, U_{m}\)

requirements

The sum of subsets \(U_1+\dots+U_{m}\) is called a direct sum IFF:

each element in \(U_1+\dots +U_{m}\) can be written in exactly one way as a sum \(u_1 +\dots +u_{m}\), where each \(u_{j} \in U_{j}\) (loosely: the analogue of linear independence, but for whole subspaces)

We use \(\oplus\) to represent direct sum.
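For a quick concrete example: \(\mathbb{R}^{2}\) is the direct sum of its two coordinate axes,

\begin{equation} \mathbb{R}^{2} = \{(x,0): x \in \mathbb{R}\} \oplus \{(0,y): y \in \mathbb{R}\} \end{equation}

since every \((x,y) \in \mathbb{R}^{2}\) can be written in exactly one way as \((x,0)+(0,y)\).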

additional information

why is it called a direct sum?

Something is not a direct sum if any of its components can be described using the others. It's kind of like linear independence, but! on entire spaces.
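For example, take \(U_1 = \{(x,0): x \in \mathbb{R}\}\) and \(U_2 = \mathbb{R}^{2}\). \(U_2\) already describes all of \(U_1\), so the sum \(U_1+U_2\) is not direct; for instance:

\begin{equation} (1,0) = (1,0) + (0,0) = (0,0) + (1,0) \end{equation}

gives two distinct representations of the same vector.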

a sum of subsets is a direct sum IFF there is only one way to write \(0\)

Given \(U_1, \dots, U_{m}\) are subspaces of \(V\), then \(U_1+\dots +U_{m}\) is a direct sum IFF the only way to write \(0\) as a sum \(u_1 +\dots +u_{m}\), where each \(u_{j} \in U_{j}\), is by taking each element to \(0\).

Proof:

if— If some \(U_1 + \dots +U_{m}\) is a direct sum, definitionally there is only one way to write \(0\). And you can always write \(0\) by taking each \(u_{j}=0\), as each \(U_{j}\) is a subspace and so contains the additive identity; by uniqueness, that is the only way.

only if— We are given that there is only one way to write \(0\) as a sum:

\begin{equation} 0 = u_1+ u_2+ \dots+ u_{m}: u_j \in U_{j} \end{equation}

namely, by taking \(u_1=u_2=\dots =u_{m}=0\).

Assume for the sake of contradiction that \(U_1 + \dots +U_{m}\) is not a direct sum. Then some vector \(v_1\) has two distinct representations:

\begin{equation} v_1 = u_1+u_2+\dots + u_{m}: u_{j} \in U_{j} \end{equation}

and

\begin{equation} v_1 = w_1+w_2+\dots + w_{m}: w_{j} \in U_{j} \end{equation}

with \(u_{j} \neq w_{j}\) for at least one \(j\): that is, there are two distinct representations of a vector given the sum of subsets.

Subtracting these representations, then:

\begin{equation} (v_1-v_1) = (u_1-w_1) + \dots +(u_{m}-w_{m}): u_{j}, w_{j} \in U_{j} \end{equation}

Finally, then:

\begin{equation} 0 = (u_1-w_1) + \dots +(u_{m}-w_{m}): u_{j}, w_{j} \in U_{j} \end{equation}

Each difference \(u_{j}-w_{j}\) lives in \(U_{j}\), as subspaces are closed under addition and scalar multiplication. So this is a way of writing \(0\) as a sum drawn from the \(U_{j}\), and we have established that each slot of such a sum must equal \(0\). Therefore \(u_{j}-w_{j} = 0\), meaning \(u_{j}=w_{j}\) for every \(j\): the two representations of \(v_{1}\) were not distinct after all. Reaching contradiction. \(\blacksquare\)
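This result gives a quick way of checking concrete sums. For example, with \(U_1 = \{(x,0): x \in \mathbb{R}\}\) and \(U_2 = \{(t,t): t \in \mathbb{R}\}\) in \(\mathbb{R}^{2}\):

\begin{equation} 0 = (a,0) + (b,b) \implies b = 0 \implies a = 0 \end{equation}

so the only way to write \(0\) takes each element to \(0\), and \(U_1 + U_2\) is a direct sum.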

a sum of subsets is a direct sum IFF their intersection is the set containing \(0\)

Take \(U\) and \(W\), two subspaces of \(V\). \(U+W\) is a direct sum IFF \(U \cap W = \{0\}\).

Proof:

if— Suppose \(U+W\) is a direct sum. For any \(v \in U \cap W\), we can write \(0\) as:

\begin{equation} 0 = v+(-v) \end{equation}

where \(v\) is in \(U\) and \(-v\) is in \(W\) (as both \(U\) and \(W\) are subspaces and we are given \(v \in U \cap W\), scalar multiplication is closed on both, so both contain \(-1v=-v\).)

By the unique representation in the definition of direct sums, there is only one way to construct this expression: each term must be \(0\), so \(v=0\).

Hence:

\begin{equation} U \cap W = \{0\} \end{equation}

only if— Suppose \(U \cap W = \{0\}\). Take \(u \in U\) and \(w \in W\) satisfying:

\begin{equation} u + w = 0 \end{equation}

If we can show that the only such combination of \(u\) and \(w\) writing \(0\) is \(u=w=0\), we satisfy the previous result and therefore \(U+W\) is a direct sum.

The expression above implies that \(w\) is the additive inverse of \(u\); therefore, \(u = -w\). As both \(U\) and \(W\) are subspaces closed under scalar multiplication, \(u=-w \in W\) and likewise \(w=-u \in U\). Hence \(u\) and \(w\) are both in \(U \cap W\).

As the intersection of \(U\) and \(W\) is \(\{0\}\), \(u=w=0\). Therefore, there is only one representation of \(0\), namely with \(u=0,w=0\), making \(U+W\) a direct sum. \(\blacksquare\)
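In practice this intersection check is usually the fastest test. For \(U = \{(x,0): x \in \mathbb{R}\}\) and \(W = \{(0,y): y \in \mathbb{R}\}\), any \(v \in U \cap W\) must have both coordinates equal to \(0\), so:

\begin{equation} U \cap W = \{(0,0)\} \implies U \oplus W = \mathbb{R}^{2} \end{equation}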

direct sum proofs are not pairwise!

The intersection result above only deals with a pair of subspaces. If you have three or more subspaces, it doesn't apply: checking intersections pairwise is not enough (see the counterexample below)!
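For example, take three lines through the origin in \(\mathbb{R}^{2}\): \(U_1 = \{(x,0)\}\), \(U_2 = \{(0,y)\}\), and \(U_3 = \{(t,t)\}\). Every pairwise intersection is \(\{0\}\), yet \(U_1+U_2+U_3\) is not a direct sum, because \(0\) has a second representation:

\begin{equation} 0 = (1,0) + (0,1) + (-1,-1) \end{equation}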

every subspace of \(V\) is part of a direct sum equaling \(V\)

For every subspace \(U\) of a finite-dimensional \(V\), there is a subspace \(W\) of \(V\) for which \(V = U \oplus W\).

Because \(V\) is defined to be finite-dimensional, and every subspace of a finite-dimensional vector space is itself finite-dimensional, \(U\) is finite-dimensional.

Therefore, because every finite-dimensional vector space has a basis, \(U\) has a basis \(u_1, \dots u_{m}\).

Because bases are linearly independent, and \(U \subset V\), \(u_1, \dots u_{m}\) is a linearly independent list in \(V\).

Because a linearly independent list extends to a basis, we can construct \(u_1, \dots u_{m}, w_{1}, \dots w_{n}\) as a basis of \(V\). We then define \(W = span(w_1, \dots w_{n})\), the space formed as the span of the “extension” vectors that complete the basis of \(V\).

Because the list \(u_1, \dots u_{m}, w_{1}, \dots w_{n}\) we made is a basis of \(V\), \(U+W=V\).

You can see this because every element \(v \in V\) can be constructed with a linear combination of \(u_1, \dots u_{m}, w_{1}, \dots w_{n}\) (again, because this list was shown to be a basis of \(V\) and therefore spans \(V\).) Then, to show that \(U+W=V\), we can collapse \(a_{1}u_1 + \dots + a_{m}u_{m}=u \in U\) and \(c_{1}w_1 + \dots +c_{n}w_{n} = w \in W\). Hence, every element \(v \in V\) can be written as \(u+w\) with \(u \in U\) and \(w \in W\), making \(U+W=V\).
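As a concrete instance of this construction: take \(U = span((1,0,0))\) inside \(V = \mathbb{R}^{3}\). The basis \((1,0,0)\) of \(U\) extends to the basis \((1,0,0), (0,1,0), (0,0,1)\) of \(\mathbb{R}^{3}\), giving \(W = span((0,1,0), (0,0,1))\); indeed every vector splits as:

\begin{equation} (a,b,c) = (a,0,0) + (0,b,c) \end{equation}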

Now, we have to show that this sum is a direct sum. There are a few ways of going about this; the one presented by Axler leverages the fact that a sum of subsets is a direct sum IFF their intersection is the set containing \(0\), i.e. that \(U \cap W = \{0\}\).

Given some element \(v\) that lives in the intersection of \(U\) and \(W\), it can be written both as a linear combination of \(u_1, \dots u_{m}\) and as a linear combination of \(w_1, \dots w_{n}\) (and since \(u_1, \dots u_{m}, w_1, \dots w_{n}\) is a basis, the whole list is linearly independent.)

Intuition: if a non-zero element lives in the span of each of two lists whose concatenation is still linearly independent, it must be writable as a linear combination of other elements of that linearly independent list, which is absurd (it violates linear independence). The only exception is \(0\).

Actual proof:

Suppose \(v \in U \cap W\), so \(v = a_1u_1 + \dots +a_{m}u_{m}\) as well as \(v=b_1w_{1} + \dots + b_{n}w_{n}\). Subtracting the two representations results in:

\begin{equation} 0 = a_1u_1+ \dots + a_{m} u_{m} - b_1w_1 - \dots - b_{n}w_{n} \end{equation}

Having already declared this list linearly independent, we see that each scalar \(a_1, \dots, a_{m}, -b_1, \dots, -b_{n}\) must equal \(0\) for this expression to hold. Therefore \(v = 0u_1 + \dots +0u_{m}=0\), so \(U \cap W = \{0\}\) and \(V = U \oplus W\). \(\blacksquare\)
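Continuing the \(\mathbb{R}^{3}\) instance from above: an element of \(U \cap W\) would have to look like \((a,0,0)\) and \((0,b,c)\) at the same time, forcing \(a=b=c=0\); hence:

\begin{equation} U \cap W = \{(0,0,0)\} \implies \mathbb{R}^{3} = U \oplus W \end{equation}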