Thoughts on Axler 4
Last edited: August 8, 2025

“Because this chapter is not about linear algebra, your instructor may go through it rapidly. You may not be asked to scrutinize all the proofs. Make sure, however, that you at least read and understand the statements of all the results in this chapter—they will be used in later chapters.”
So we are not going to go through everything very carefully. Instead, I’m just going to go through some interesting results at my leisure. This also means that this note is not very complete.
Tiago Forte
Time Complexity
Time complexity is a function \(t(n): \mathbb{N} \to \mathbb{N}\), and we define:
\begin{equation} \text{TIME}\qty(t(n)) = \qty {\text{languages } L \mid\ \exists \text{TM that decides $L$ in $O(t(n))$ steps} } \end{equation}
notice! the big-O is baked into the definition. And we write:
\begin{equation} P = \bigcup_{c \in \mathbb{N}} \text{TIME}\qty(n^{c}) \end{equation}
we call problems in this class “efficient” or “feasible”.
\begin{equation} EXP = \bigcup_{c \in \mathbb{N}} \text{TIME}\qty(2^{n^{c}}) \end{equation}
and adding in non-determinism, we have:
\begin{equation} \text{NTIME}\qty(t(n)) = \qty {\text{languages } L \mid\ \exists \text{NTM that decides $L$ in $O(t(n))$ steps} } \end{equation}
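Mirroring the definition of P above, the nondeterministic analogue (a standard definition, added here to complete the parallel) is:

\begin{equation} NP = \bigcup_{c \in \mathbb{N}} \text{NTIME}\qty(n^{c}) \end{equation}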
tokenization
Every NLP task involves some kind of text normalization.
- tokenizing words
- normalizing word formats (lemmatize?)
- sentence and paragraph segmentation
For Latin, Arabic, Cyrillic, and Greek writing systems, spaces can usually be used for tokenization. Other writing systems (such as Chinese) don’t separate words with spaces, so this doesn’t work. See morpheme
Subword Tokenization
Algorithms for breaking up tokens using corpus statistics, acting at a lower-than-word level.
- BPE
- Unigram Language Modeling tokenization
- WordPiece
They all work in two parts:
- a token learner: takes a training corpus and derives a vocabulary set
- a token segmenter that tokenizes text according to the vocab
See also Downsides of Subword Tokenization
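As an illustration of the learner/segmenter split, here is a minimal sketch of BPE. The function names and the `</w>` end-of-word marker are my own choices for illustration, not from any particular library:

```python
from collections import Counter

def merge_pair(symbols, a, b):
    """Replace every adjacent occurrence of (a, b) in symbols with a+b."""
    out, i = [], 0
    while i < len(symbols):
        if i + 1 < len(symbols) and symbols[i] == a and symbols[i + 1] == b:
            out.append(a + b)
            i += 2
        else:
            out.append(symbols[i])
            i += 1
    return out

def learn_bpe(corpus, num_merges):
    """Token learner: derive a vocabulary from a training corpus by
    repeatedly merging the most frequent adjacent symbol pair."""
    # Each word is a tuple of symbols; "</w>" marks the end of a word.
    words = Counter(tuple(w) + ("</w>",) for w in corpus.split())
    vocab = {sym for word in words for sym in word}
    merges = []
    for _ in range(num_merges):
        # Count every adjacent symbol pair, weighted by word frequency.
        pairs = Counter()
        for word, freq in words.items():
            for a, b in zip(word, word[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        (a, b), _ = pairs.most_common(1)[0]
        merges.append((a, b))
        vocab.add(a + b)
        # Apply the new merge to every word in the corpus.
        new_words = Counter()
        for word, freq in words.items():
            new_words[tuple(merge_pair(word, a, b))] += freq
        words = new_words
    return vocab, merges

def segment(word, merges):
    """Token segmenter: tokenize a new word by replaying the learned merges."""
    symbols = list(word) + ["</w>"]
    for a, b in merges:
        symbols = merge_pair(symbols, a, b)
    return symbols
```

The learner only sees corpus statistics; the segmenter just replays the learned merge list in order on unseen text, which is what makes the two-part design work.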
top-down parsing
recursive descent parsing
Given the grammar:
E -> T | T + E
T -> int | int * T | (E)
And suppose our token stream is: (int5). We will start with the top-level non-terminal E, and will try to apply the rules for E in order…
(this is exactly how you imagine one builds a parser via recursive backtracking)
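That recursive-backtracking picture can be sketched directly as a hand-written recognizer for the grammar above. Trying the longer productions first so that backtracking behaves sensibly is my choice, not part of the grammar:

```python
def parse(tokens):
    """Backtracking recursive-descent recognizer for:
    E -> T + E | T
    T -> ( E ) | int * T | int
    Each helper takes a position i and returns the position after the
    match, or None on failure. Accepts iff E derives the whole stream."""

    def E(i):
        # E -> T + E
        j = T(i)
        if j is not None and j < len(tokens) and tokens[j] == '+':
            k = E(j + 1)
            if k is not None:
                return k
        # E -> T  (backtrack: retry from i with the other production)
        return T(i)

    def T(i):
        # T -> ( E )
        if i < len(tokens) and tokens[i] == '(':
            j = E(i + 1)
            if j is not None and j < len(tokens) and tokens[j] == ')':
                return j + 1
        if i < len(tokens) and tokens[i] == 'int':
            # T -> int * T
            if i + 1 < len(tokens) and tokens[i + 1] == '*':
                j = T(i + 2)
                if j is not None:
                    return j
            # T -> int
            return i + 1
        return None

    return E(0) == len(tokens)
```

For example, `parse(['(', 'int', ')'])` succeeds, while `parse(['int', '+'])` fails after backtracking exhausts both productions for E.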
when does it break?
Consider S -> S a. A recursive-descent parser expanding S calls itself on S without consuming any input, so it recurses forever: this is left recursion. The standard fix rewrites the rule right-recursively: with some non-recursive alternative, say S -> S a | b, this becomes S -> b S1, S1 -> a S1 | ε.
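To see the failure concretely, here is a naive parser for the left-recursive rule next to its right-recursive rewrite. The base production S -> b, the S1 name, and both function names are added for illustration:

```python
def parse_S_naive(tokens, i):
    """Naive recursive descent on the left-recursive rule S -> S a | b."""
    # S -> S a: expand S first, but no input has been consumed yet,
    # so this call recurses forever and blows the stack.
    j = parse_S_naive(tokens, i)
    if j is not None and j < len(tokens) and tokens[j] == 'a':
        return j + 1
    # S -> b
    if i < len(tokens) and tokens[i] == 'b':
        return i + 1
    return None

def parse_S_fixed(tokens, i):
    """After eliminating left recursion: S -> b S1, S1 -> a S1 | eps."""
    if i < len(tokens) and tokens[i] == 'b':
        j = i + 1
        # S1 -> a S1 | eps: consume trailing 'a's, always making progress.
        while j < len(tokens) and tokens[j] == 'a':
            j += 1
        return j
    return None
```

Calling `parse_S_naive` on any input raises a `RecursionError`, while `parse_S_fixed` accepts exactly the strings b a…a that the original grammar generates.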