Posts

Universal Hash Family

Last edited: October 10, 2025

\begin{equation} H = \qty {h: \qty {0,1}^{u} \to \qty {0,1}^{k}} \end{equation}

properties of pairwise independence:

  1. 1-wise independent: \(\forall y, \forall a\), \(P_{h \sim H}\qty [h\qty(y) = a] = 2^{-k}\)
  2. 2-wise independent: \(\forall y \neq y'\), \(\forall a, a'\), \(P_{h \sim H}\qty [h\qty(y) = a \land h\qty(y') = a'] = 2^{-2k}\)

Note that this is not full randomness, only a limited amount of it: the outputs look uniform on any one or two keys, but not necessarily on three or more.
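A small sketch of these two properties, using the textbook affine family \(h_{a,b}(y) = (ay + b) \bmod p\) over a prime field (my own illustrative choice — the definitions above use the range \(\qty{0,1}^{k}\), so probabilities here come out as \(1/p\) and \(1/p^{2}\) instead of \(2^{-k}\) and \(2^{-2k}\)):

```python
from itertools import product
from fractions import Fraction

p = 5  # a small prime; keys and outputs both live in GF(p)

# The affine family h_{a,b}(y) = (a*y + b) mod p, with (a, b)
# drawn uniformly: a standard example of a 2-wise independent family.
family = [(a, b) for a, b in product(range(p), repeat=2)]

def h(a, b, y):
    return (a * y + b) % p

# 1-wise: for every key y and every target t, P[h(y) = t] = 1/p
for y in range(p):
    for t in range(p):
        hits = sum(1 for a, b in family if h(a, b, y) == t)
        assert Fraction(hits, len(family)) == Fraction(1, p)

# 2-wise: for every pair y != y2 and targets (t, t2),
# P[h(y) = t and h(y2) = t2] = 1/p^2
for y, y2 in product(range(p), repeat=2):
    if y == y2:
        continue
    for t, t2 in product(range(p), repeat=2):
        hits = sum(1 for a, b in family
                   if h(a, b, y) == t and h(a, b, y2) == t2)
        assert Fraction(hits, len(family)) == Fraction(1, p * p)
```

The 2-wise check works because for \(y \neq y'\) the system \(ay + b = t,\ ay' + b = t'\) has exactly one solution \((a, b)\) in GF(p), so every target pair is hit by exactly one family member.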

requirements

For all \(u_{i}, u_{j} \in U\) in the universe, with \(u_{i} \neq u_{j}\), and \(n\) buckets providing the desired randomness, we require:

\begin{equation} P_{h \sim H}\qty [h\qty(u_{i}) = h\qty(u_{j})] \leq \frac{1}{n} \end{equation}
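The standard requirement — that any two distinct keys collide with probability at most \(1/n\) over the draw of \(h\) — can be checked exhaustively for the classic Carter–Wegman construction \(h_{a,b}(u) = ((au + b) \bmod p) \bmod n\) (my choice of family here; the note does not fix one):

```python
from itertools import product
from fractions import Fraction

p, n = 7, 3  # p: a prime at least the universe size; n: number of buckets

# Carter-Wegman family: a in {1..p-1}, b in {0..p-1}
family = [(a, b) for a in range(1, p) for b in range(p)]

def h(a, b, u):
    return ((a * u + b) % p) % n

# Universal-hashing requirement: for all u_i != u_j,
# P_{h ~ H}[h(u_i) = h(u_j)] <= 1/n
for ui, uj in product(range(p), repeat=2):
    if ui == uj:
        continue
    collisions = sum(1 for a, b in family if h(a, b, ui) == h(a, b, uj))
    assert Fraction(collisions, len(family)) <= Fraction(1, n)
```

Requiring \(a \neq 0\) matters: with \(a = 0\) every key maps to the same bucket, and the collision bound fails.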

Basics of ML for 224n

Last edited: October 10, 2025

Random thoughts as I scan through the book:

Central framing: learning as a means to generalize + predict

Key Tasks

  • regression (predict a value)
  • binary classification (sort an example into a boolean class of Y/N)
  • multi-class classification (sort an example into one of several classes)
  • ranking (sort objects into an order of relevance)

Optimality of Bayes Optimal Classifier

If we have the underlying data distribution, we can classify \(y\) from \(x\) by choosing:

\begin{equation} \hat{y} = \arg\max_{y} P(y \mid x) \end{equation}
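A toy sketch of this rule, with a made-up joint distribution table (my own example — not from the book): since \(x\) is fixed at prediction time, maximizing \(P(y \mid x)\) over \(y\) is the same as maximizing the joint \(P(x, y)\).

```python
# Hypothetical true joint distribution P(x, y) over binary x and y
joint = {
    (0, 0): 0.30, (0, 1): 0.10,
    (1, 0): 0.15, (1, 1): 0.45,
}

def bayes_classify(x):
    # argmax over y of P(x, y); dividing by P(x) would not change the argmax
    ys = [y for (xi, y) in joint if xi == x]
    return max(ys, key=lambda y: joint[(x, y)])

print(bayes_classify(0))  # -> 0, since P(x=0, y=0)=0.30 beats P(x=0, y=1)=0.10
print(bayes_classify(1))  # -> 1, since P(x=1, y=1)=0.45 beats P(x=1, y=0)=0.15
```

No classifier can beat this rule in expected accuracy, because at each \(x\) it picks the label with the largest conditional mass.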

SU-CS229 OCT132025

Last edited: October 10, 2025

Key Sequence

Notation

New Concepts

Important Results / Claims

Questions

Interesting Factoids

SU-CS229 OCT202025

Last edited: October 10, 2025

Key Sequence

Notation

New Concepts

Important Results / Claims

Questions

Interesting Factoids

density estimation

Last edited: October 10, 2025