matrices are like buckets of numbers. ok, ok, seriously:

matrices are a way of encoding the basis of domain result: since Linear Maps are determined uniquely by where they map the basis anyway, we may as well make a mathematical object that records exactly that information and use it to encode the linear maps.

## definition

Let \(n\), \(m\) be positive integers. An \(m\) by \(n\) matrix \(A\) is a rectangular array of elements of \(\mathbb{F}\) with \(m\) rows and \(n\) columns:

\begin{equation} A = \mqty(A_{1,1} & \dots & A_{1,n} \\ \vdots && \vdots \\ A_{m,1} & \dots & A_{m,n}) \end{equation}

the matrix representing a Linear Map \(T\) is written \(\mathcal{M}(T)\). This may be basis-specific; see matrix of Linear Map for more.

## additional information

### matrix of Linear Map

This result codifies the claim that matrices represent Linear Maps by what they do to the basis of the space of concern.

Suppose \(T \in \mathcal{L}(V,W)\), \(v_1, \dots, v_{n}\) is a basis of \(V\), and \(w_1, \dots, w_{m}\) is a basis of \(W\). Then the matrix of \(T\) with respect to these bases is the \(m\) by \(n\) (rows by columns!) matrix \(A\) where:

\begin{equation} Tv_{k} = A_{1,k}w_1 + \dots + A_{m,k}w_{m} \end{equation}

Quick way to remember this result: inputs across columns, outputs across rows. Think about how a matrix is multiplied: you smash the input vector horizontally across the top, then go down each column. Therefore, each column of the matrix contains the instructions for where to send one input basis vector, written down the rows of that column as a linear combination of the output basis.

IF the bases being used are unclear (i.e. if we had a change of basis, so didn’t use the standard basis, etc.), then the matrix with respect to a *SPECIFIC* pair of bases is written: \(\mathcal{M}(T, (v_1, \dots, v_n), (w_1, \dots, w_{m}))\).
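The "columns hold the images of the input basis" recipe can be sketched in a few lines of numpy. The map \(T\) below is my own example, not from the text; with standard bases, the coordinates of \(T e_k\) are just its entries:

```python
# sketch: build M(T) for the (made-up) map T(x, y) = (x + 2y, 3y, x - y),
# a linear map F^2 -> F^3, using the standard bases of both spaces
import numpy as np

def T(v):
    x, y = v
    return np.array([x + 2 * y, 3 * y, x - y])

# column k of M(T) holds the coordinates of T(v_k) in the output basis;
# the rows of the identity matrix are the standard basis vectors e_k
M = np.column_stack([T(e) for e in np.eye(2)])

# applying the matrix to a vector's coordinates agrees with applying the map
v = np.array([5, 7])
assert np.allclose(M @ v, T(v))
```

Note how the matrix comes out \(3\) by \(2\): outputs across rows, inputs across columns.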

### matrix of a vector

The matrix of a vector is just an encoding of the scalars needed to scale the basis vectors of the space so that they add up to that vector.

More formally—

Suppose \(v \in V\), and \(v_1, \dots, v_{n}\) is a basis of \(V\). The matrix representing the vector \(v\) is the \(n\) by \(1\) matrix:

\begin{equation} \mathcal{M}(v) = \mqty(c_1 \\ \vdots \\ c_{n}) \end{equation}

where \(c_1, \dots, c_{n}\) are the scalars such that:

\begin{equation} v = c_1v_1 + \dots +c_{n}v_{n} \end{equation}

### column notation

One can use a dot to index a matrix’s columns and rows.

Suppose \(A\) is an \(m\) by \(n\) matrix.

- For \(1 \leq j \leq m\), \(A_{j,.}\) denotes the \(1\) by \(n\) matrix consisting of only row \(j\) of \(A\)
- For \(1 \leq k \leq n\), \(A_{.,k}\) denotes the \(m\) by \(1\) matrix consisting of only column \(k\) of \(A\)
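In numpy this is slicing; the one thing to watch (my observation, not from the text) is that slicing with a range keeps the matrix shape, while plain integer indexing flattens to a vector. Also numpy counts from \(0\), the notes from \(1\):

```python
# sketch: dot-indexing a 2-by-3 matrix with numpy slices
import numpy as np

A = np.arange(6).reshape(2, 3)   # [[0, 1, 2], [3, 4, 5]]

row_j = A[0:1, :]   # A_{1,.}: the 1-by-n matrix that is row 1
col_k = A[:, 1:2]   # A_{.,2}: the m-by-1 matrix that is column 2

assert row_j.shape == (1, 3)     # still a matrix, not a flat vector
assert col_k.shape == (2, 1)
```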

### sums and scalar multiplication of matrices

According to Jana, a third grader can add and scalar multiply matrices. So I am not going to write them here.

However, what’s interesting is that these operations actually agree with the corresponding operations on Linear Maps:

- Suppose \(S,T \in \mathcal{L}(V,W)\), then \(\mathcal{M}(S+T) = \mathcal{M}(S)+\mathcal{M}(T)\)
- Suppose \(\lambda \in \mathbb{F}, T \in \mathcal{L}(V,W)\), then \(\mathcal{M}(\lambda T) = \lambda \mathcal{M}(T)\)

The verification of this result, briefly, is that:

Recall that matrices encode where each input basis vector gets sent, as a linear combination of the output basis, down each column; recall also that \((S+T)v = Sv+Tv\). Now: write out the matrix of the sum without performing the sum, apply it to a basis vector, distribute the coordinates across the sum, and separate the result into two matrices. This shows the matrix of the sum acting on \(v\) gives \(Sv + Tv\); then invoke the definition of the sum of Linear Maps.

scalar multiplication works in the same darn way.
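The heart of that verification is just distributivity of matrix-vector application, which is easy to poke at numerically. The matrices below stand in for \(\mathcal{M}(S)\) and \(\mathcal{M}(T)\) in some fixed pair of bases (random example values, mine):

```python
# sketch: entrywise sum / scaling of matrices acts like sum / scaling of maps
import numpy as np

rng = np.random.default_rng(0)
MS = rng.integers(-5, 5, size=(3, 2))   # stands in for M(S)
MT = rng.integers(-5, 5, size=(3, 2))   # stands in for M(T), same bases
v = rng.integers(-5, 5, size=2)

# (S+T)v = Sv + Tv: the entrywise sum of matrices applies the sum of maps
assert np.array_equal((MS + MT) @ v, MS @ v + MT @ v)

# (lambda T)v = lambda (Tv): scaling the matrix scales the map
lam = 3
assert np.array_equal((lam * MS) @ v, lam * (MS @ v))
```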

### matrix multiplication

### \(\mathbb{F}^{m,n}\)

For \(m\) and \(n\) positive integers, the set of all \(m\) by \(n\) matrices with entries in \(\mathbb{F}\) is called \(\mathbb{F}^{m,n}\).

This is a vector space! “obviously” its basis is the set of all matrices with \(1\) in one slot and \(0\) in all others. There are \(m\cdot n\) of those matrices, so \(\dim \mathbb{F}^{m,n}=m\cdot n\).
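That basis is concrete enough to build by hand; here is a quick sketch for \(m=2\), \(n=3\) (dimensions chosen by me for illustration). Any matrix is the linear combination of these one-hot matrices with its own entries as the coefficients:

```python
# sketch: the m*n one-hot matrices form a basis of F^{m,n}; here m=2, n=3
import numpy as np

m, n = 2, 3
basis = []
for i in range(m):
    for j in range(n):
        E = np.zeros((m, n))
        E[i, j] = 1.0            # 1 in one slot, 0 everywhere else
        basis.append(E)

assert len(basis) == m * n       # dim F^{m,n} = m * n

# reconstruct an arbitrary matrix from its entries as coefficients
A = np.arange(6.0).reshape(m, n)
recon = sum(A[i, j] * basis[i * n + j] for i in range(m) for j in range(n))
assert np.array_equal(recon, A)
```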

### invertibility

### elementary matrix

elementary matrices are slight variations of the identity matrix which perform the elementary row operations:

- swap rows
- add a row to another
- scale rows
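Each elementary matrix is obtained by doing the row operation to the identity; left-multiplying by it then performs that same operation on any matrix. A small numpy sketch (the \(2\) by \(2\) example matrix is mine):

```python
# sketch: build each elementary matrix from I, then apply it by left-multiplying
import numpy as np

A = np.array([[1., 2.], [3., 4.]])

# swap rows: permute the rows of I
E_swap = np.eye(2)[[1, 0]]
assert np.array_equal(E_swap @ A, A[[1, 0]])

# add row 1 to row 2: put a 1 in the (2,1) slot of I
E_add = np.eye(2)
E_add[1, 0] = 1.0
assert np.array_equal(E_add @ A, np.array([[1., 2.], [4., 6.]]))

# scale row 2 by 5: change a diagonal entry of I
E_scale = np.diag([1., 5.])
assert np.array_equal(E_scale @ A, np.array([[1., 2.], [15., 20.]]))
```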

### determinants

See determinants

### Gaussian elimination

### diagonal matrix

see diagonal matrix

### upper-triangular matricies

### change-of-basis

To express \(A\) with respect to a new basis whose vectors form the columns of \(B\), create a similar matrix:

\begin{equation} B^{-1} A B = C \end{equation}

\(C\) is \(A\) written in terms of the basis given by \(B\).
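A quick numerical sketch of the similarity computation (both matrices are my own example values). Similar matrices represent the same map, so they must share basis-independent quantities like the trace and determinant:

```python
# sketch: re-express A in the basis whose vectors are the columns of B
import numpy as np

A = np.array([[2., 1.], [0., 3.]])   # matrix in the old basis
B = np.array([[1., 1.], [0., 1.]])   # columns: the new basis vectors

C = np.linalg.inv(B) @ A @ B         # similar matrix: A w.r.t. the new basis

# sanity check: same map underneath, so same trace and determinant
assert np.isclose(np.trace(C), np.trace(A))
assert np.isclose(np.linalg.det(C), np.linalg.det(A))
```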