Outcome Uncertainty
Last edited: August 8, 2025
Action outcomes are uncertain: taking the same action from the same state may lead to different results.
overfitting
Consider something like a polynomial interpolation:
Interpolating polynomials (and most ML models in general) are smooth, so a model that fits the sample points exactly will “overshoot” between them and “bounce around”, especially near the edges of the interval (Runge’s phenomenon).
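A minimal sketch of this effect, using the classic Runge function \(1/(1+25x^{2})\) as the target (the function and sample sizes here are illustrative choices, not from the note):

```python
import numpy as np

def runge(x):
    # The classic function on which equispaced interpolation fails badly.
    return 1.0 / (1.0 + 25.0 * x**2)

nodes = np.linspace(-1, 1, 11)                    # 11 equally spaced samples
coeffs = np.polyfit(nodes, runge(nodes), deg=10)  # degree-10 exact fit

# The polynomial passes through every sample point (training error ~ 0) ...
fit_at_nodes = np.polyval(coeffs, nodes)
print(np.max(np.abs(fit_at_nodes - runge(nodes))))

# ... but "bounces around" between the samples, worst near the edges.
x_dense = np.linspace(-1, 1, 1001)
print(np.max(np.abs(np.polyval(coeffs, x_dense) - runge(x_dense))))
```

The second maximum error is orders of magnitude larger than the first: zero training error, large error off the training points, which is overfitting in miniature.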
P vs. NP
Polynomial Time \(P\) vs. Nondeterministic Polynomial Time \(NP\)
- \(P\) are the problems that can be efficiently solved
- \(NP\) are the problems where proposed solutions can be efficiently verified
so! is \(P=NP\)?
If this is true, there are some consequences:
- proof search could be automated: verifying a proof is efficient, so finding one would be too
- most cryptography would break, since recovering a secret key would be no harder than verifying it
Any problem whose solutions can be checked efficiently could also be solved (and hence globally optimized) efficiently. This is too good to be true, so probably \(P \neq NP\).
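The search/verify asymmetry can be sketched with subset-sum, a standard NP problem (the instance below is an arbitrary example):

```python
from itertools import combinations

# Subset-sum is in NP: a proposed solution (a "certificate") is easy to
# check, even though finding one may require searching exponentially
# many subsets.
nums, target = [3, 34, 4, 12, 5, 2], 9

def verify(certificate):
    # Polynomial-time check: the subset comes from nums and sums to target.
    return all(x in nums for x in certificate) and sum(certificate) == target

def solve():
    # Brute-force search: enumerates up to 2^len(nums) subsets.
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if sum(subset) == target:
                return list(subset)
    return None

print(verify([4, 5]))  # verification is cheap
print(solve())         # search is the expensive part
```

If \(P = NP\), the expensive `solve` loop could always be replaced by something as fast as `verify`.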
p(T)
We can use the coefficients of a polynomial to build a new operator: a sum of powers of an operator, scaled by the coefficients \(a_{j}\) of the polynomial.
constituents
- \(p(z) = a_{0} + a_{1}z + a_{2}z^{2} + \cdots + a_{m}z^{m}\), a polynomial for \(z \in \mathbb{F}\)
- \(T \in \mathcal{L}(V)\)
requirements
\(p(T)\) is the operator defined by:
\begin{equation} p(T) = a_{0} I + a_{1} T + a_{2} T^{2} + \cdots + a_{m} T^{m} \end{equation}
where \(T^{m}\) is the \(m\)th power of the operator: \(T\) composed with itself \(m\) times, with \(T^{0} = I\).
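A small sketch of the definition, representing \(T\) as a matrix (the coefficients and matrix below are arbitrary examples, not from the note):

```python
import numpy as np

def poly_of_operator(coeffs, T):
    # Evaluate p(T) = a0*I + a1*T + a2*T^2 + ... for a matrix T.
    result = np.zeros_like(T)
    power = np.eye(T.shape[0])   # T^0 = I
    for a_j in coeffs:
        result = result + a_j * power
        power = power @ T        # advance to the next power of T
    return result

a = [1.0, -3.0, 2.0]             # p(z) = 1 - 3z + 2z^2
T = np.array([[2.0, 1.0],
              [0.0, 3.0]])

print(poly_of_operator(a, T))    # = I - 3T + 2T^2
```

Accumulating `power = power @ T` avoids recomputing each \(T^{j}\) from scratch, mirroring how the sum in the definition is built term by term.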
PAC Learning
Probably approximately correct (PAC) learning is a formal framework for learning from examples. Suppose you have a concept \(c \in C\). We want to find a hypothesis \(h \in H\) which gets as close to the boundary between concepts as possible.
We want to minimize false positives and false negatives.
constituents
- Instance space \(X\)
- Concept class \(C\) (functions over \(X\))
- Hypothesis class \(H\) (functions over \(X\))
- “proper learning”: the special case \(H=C\)
- \(A\) PAC-learns \(C\) if
- \(\forall c \in C\) and every distribution \(D\) over \(X\), when \(A\) gets inputs sampled from \(D\) and outputs \(h \in H\), we want…
\begin{equation} P_{A} [ P_{x \sim D}[h(x) \neq c(x)] > \epsilon] < \delta \end{equation}
i.e., with probability at least \(1-\delta\) over the samples (“probably”), \(h\) has error at most \(\epsilon\) (“approximately correct”).
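A toy simulation of the definition under assumed choices: \(X=[0,1]\) with \(D\) uniform, the concept a threshold at \(0.5\), and a learner that returns the tightest threshold consistent with the sample (all of these are illustrative, not from the note):

```python
import random

random.seed(0)

def concept(x):
    # Target concept c: label is 1 iff x >= 0.5.
    return x >= 0.5

def learn(sample):
    # Hypothesis = smallest positive example seen (tightest consistent
    # threshold); defaults to 1.0 if no positives were drawn.
    positives = [x for x, y in sample if y]
    return min(positives) if positives else 1.0

def error(theta, n=100_000):
    # Monte Carlo estimate of P_{x ~ D}[h(x) != c(x)] under uniform D.
    xs = [random.random() for _ in range(n)]
    return sum((x >= theta) != concept(x) for x in xs) / n

m = 1000  # sample size
sample = [(x, concept(x)) for x in (random.random() for _ in range(m))]
theta = learn(sample)
print(error(theta))   # small with high probability over the sample draw
```

As \(m\) grows, the probability (over the random sample) that the error exceeds any fixed \(\epsilon\) shrinks below any \(\delta\), which is exactly the quantified statement above.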