
PAC Learning

Last edited: August 8, 2025

Probably approximately correct (PAC) learning is a framework for formal learning theory (here met in the context of DFA learning). Suppose you have a target concept \(c \in C\). We want to find a hypothesis \(h \in H\) whose decision boundary is as close as possible to that of the concept.

We want to minimize false positives and false negatives.

constituents

  • Instance space \(X\)
  • Concept class \(C\) (functions over \(X\))
  • Hypothesis class \(H\) (functions over \(X\))
    • “proper learning”: the special case \(H=C\)
  • An algorithm \(A\) PAC-learns \(C\) if
    • \(\forall c \in C\) and for all distributions \(D\) over \(X\), when \(A\) receives examples sampled from \(D\) and outputs \(h \in H\), we require…

\begin{equation} P_{A} \left[ P_{x \sim D}[h(x) \neq c(x)] > \epsilon \right] < \delta \end{equation}

That is, with probability at least \(1-\delta\) over \(A\)'s samples, the error of \(h\) under \(D\) is at most \(\epsilon\) (\(\epsilon\) is the accuracy parameter, \(\delta\) the confidence parameter).
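For a finite hypothesis class, this guarantee comes with the classic sample-complexity bound for a consistent learner: \(m \geq \frac{1}{\epsilon}\left(\ln|H| + \ln\frac{1}{\delta}\right)\) examples suffice. A minimal sketch (the bound is the standard result; the function name and the example numbers are illustrative):

```python
import math

def sample_complexity(epsilon: float, delta: float, h_size: int) -> int:
    """Number of i.i.d. examples sufficient for a consistent learner
    over a finite hypothesis class H to be PAC:
        m >= (1/epsilon) * (ln|H| + ln(1/delta))
    """
    return math.ceil((math.log(h_size) + math.log(1.0 / delta)) / epsilon)

# e.g. |H| = 1000 hypotheses, 10% error tolerance, 95% confidence
print(sample_complexity(epsilon=0.1, delta=0.05, h_size=1000))  # → 100
```

Note the logarithmic dependence on \(|H|\) and \(1/\delta\): doubling the hypothesis class only adds \(\ln 2 / \epsilon\) examples.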

PACE

Last edited: August 8, 2025

PACE is a form of Directed Evolution: it uses a bacteriophage with part of its genome removed, engineered so that the phage can only successfully complete its infection of the host bacteria if the missing function is supplied.

Mutation of this virus then essentially generates random candidate functions, and it produces successful new generations of bacteriophage only when a candidate works.

PACE is hard

The only way to check that PACE is working in the direction you want is to sample the bacteria and hope that they are evolving in the correct direction.

Pacific Railroad Act

Last edited: August 8, 2025

paging

Last edited: August 8, 2025

papyrus

Last edited: August 8, 2025