
tokenization

Last edited: August 8, 2025

Every NLP task involves some kind of text normalization.

  1. tokenizing words
  2. normalizing word formats (lemmatize?)
  3. sentence and paragraph segmentation

For Latin, Arabic, Cyrillic, and Greek writing systems, spaces can usually be used for tokenization. Other writing systems (Chinese, for example) don't mark word boundaries with spaces. See morpheme

Subword Tokenization

Algorithms that use corpus statistics to break text into tokens below the word level.

  • BPE
  • Unigram Language Modeling tokenization
  • WordPiece

They all work in two parts (a minimal BPE sketch is below):

  • a token learner: takes a training corpus and derives a vocabulary set
  • a token segmenter: tokenizes text according to that vocabulary

See also Downsides of Subword Tokenization
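
For concreteness, here is a minimal sketch of both parts for BPE specifically. The toy corpus, function names, and the "</w>" end-of-word marker are illustrative choices, not any particular library's API.

from collections import Counter

def learn_bpe(corpus_words, num_merges):
    # Token learner: derive an ordered list of merge rules from a corpus.
    # Each word starts as a sequence of characters plus an end-of-word marker.
    vocab = Counter(tuple(word) + ("</w>",) for word in corpus_words)
    merges = []
    for _ in range(num_merges):
        # Count every adjacent symbol pair, weighted by word frequency.
        pairs = Counter()
        for symbols, freq in vocab.items():
            for pair in zip(symbols, symbols[1:]):
                pairs[pair] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        # Apply the new merge everywhere it occurs in the vocabulary.
        new_vocab = Counter()
        for symbols, freq in vocab.items():
            merged, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                    merged.append(symbols[i] + symbols[i + 1])
                    i += 2
                else:
                    merged.append(symbols[i])
                    i += 1
            new_vocab[tuple(merged)] += freq
        vocab = new_vocab
    return merges

def segment(word, merges):
    # Token segmenter: apply the learned merges, in order, to a new word.
    symbols = list(word) + ["</w>"]
    for a, b in merges:
        i, out = 0, []
        while i < len(symbols):
            if i + 1 < len(symbols) and symbols[i] == a and symbols[i + 1] == b:
                out.append(a + b)
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        symbols = out
    return symbols

corpus = ["low"] * 5 + ["lower"] * 2 + ["newest"] * 6 + ["widest"] * 3
merges = learn_bpe(corpus, num_merges=10)
print(segment("lowest", merges))  # subword pieces for a word not in the corpus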

top-down parsing

Last edited: August 8, 2025

recursive descent parsing

Given the grammar:

E -> T | T + E
T -> int | int * T | (E)

And suppose our token stream is: (int5). We start with the top-level non-terminal E and try to apply the rules for E in order…

(this is exactly how you'd imagine building a parser via recursive backtracking)
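
Here is a minimal sketch of such a backtracking parser for the grammar above, in Python. Function and token names are made up for illustration, and the alternatives are tried longest-first (T + E before T, int * T before int) so that a successful short parse doesn't block a longer one in this simple boolean scheme.

def parse(tokens):
    # Backtracking recursive descent for E -> T + E | T, T -> int * T | ( E ) | int.
    pos = 0

    def term(tok):
        # Try to consume one terminal token.
        nonlocal pos
        if pos < len(tokens) and tokens[pos] == tok:
            pos += 1
            return True
        return False

    def E():
        nonlocal pos
        save = pos
        if T() and term("+") and E():   # E -> T + E
            return True
        pos = save                      # backtrack
        return T()                      # E -> T

    def T():
        nonlocal pos
        save = pos
        if term("int") and term("*") and T():   # T -> int * T
            return True
        pos = save
        if term("(") and E() and term(")"):     # T -> ( E )
            return True
        pos = save
        return term("int")                      # T -> int

    return E() and pos == len(tokens)

# "( int )" stands in for the note's ( int5 ); token values are ignored here.
print(parse(["(", "int", ")"]))              # True
print(parse(["int", "+", "int", "*", "int"]))  # True
print(parse(["int", "+"]))                   # False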

when does it break?

It breaks on left recursion. Consider S -> S a: to expand S, the parser first has to expand S again on the same input (S becomes S a, which becomes S a a, …), so it recurses forever without ever consuming a token.
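
A tiny illustration of the failure (the stand-in functions are hypothetical; running it just hits Python's recursion limit):

def term(tok):
    # Stand-in terminal matcher; never even gets called below.
    return True

def S():
    # S -> S a : expand S by first expanding S again, on the same position.
    return S() and term("a")

try:
    S()
except RecursionError:
    print("left recursion: the parser never consumes a token and never terminates")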

Train a Bert

Last edited: August 8, 2025

So you want to train a Bert? Either your name is David or you stumbled onto this from the internet. Well, hello. I'm tired and it's 1 AM, so IDFK if this will be accurate at all. Oh, and if you're reading this because you're an NLPer, I apologize for the notation; it's 1 AM.

CLS Tokens

A Bert is a bidirectional transformer encoder model. A Transformer encoder takes a sequence of tokenized input text (each token represented as an embedding) and produces a dense embedding for each token. Hence, for each word vector \(w \in W \subset \mathbb{R}^{n}\), a Bert \(B\) is a mapping in \(\mathcal{L}(W, \mathbb{R}^{m})\) applied to each input token.
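
As a concrete (assumed) setup, here is a sketch using the Hugging Face transformers library and the public bert-base-uncased checkpoint to get one dense embedding per token, plus the [CLS] embedding:

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

# The tokenizer prepends [CLS] and appends [SEP] to the sequence.
inputs = tokenizer("So you want to train a Bert?", return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# One dense embedding per input token: shape (1, sequence_length, 768).
token_embeddings = outputs.last_hidden_state
# The first position is the [CLS] token, often used as a sequence-level summary.
cls_embedding = token_embeddings[:, 0, :]
print(token_embeddings.shape, cls_embedding.shape)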

Train-Test Split

Last edited: August 8, 2025

  • training data: to train the network
  • validation data: to select the best model/fitting strategy
  • test data: only used to report error at the end, so that no modeling choices are biased by it (it shows how the model actually performs)
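
A minimal sketch of a three-way split using scikit-learn's train_test_split; the 80/10/10 ratio and the dummy data are illustrative choices.

import numpy as np
from sklearn.model_selection import train_test_split

# Dummy data standing in for real features and labels.
X = np.random.rand(1000, 20)
y = np.random.randint(0, 2, size=1000)

# First carve off a 10% test set, then a 10%-of-the-whole validation set.
X_trainval, X_test, y_trainval, y_test = train_test_split(
    X, y, test_size=0.10, random_state=0
)
X_train, X_val, y_train, y_val = train_test_split(
    X_trainval, y_trainval, test_size=0.10 / 0.90, random_state=0
)

# Train on the training set, choose models on the validation set,
# and report error on the test set exactly once at the end.
print(len(X_train), len(X_val), len(X_test))  # roughly 800 / 100 / 100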

Training Data Sourcing

Last edited: August 8, 2025

Finding training data for AI is hard. In practice, training data comes about in one of two ways:

Intentional training data

  • curated specifically to serve as training data
  • time was spent thinking about bias, control, etc.

Training set of convenience

  • Dataset that just comes about
  • Problematic: it can accidentally introduce bias into the data. For example, Googling images of CEOs, which is convenient, returned almost exclusively white men for a while.