Posts


(NOTES COPY) Relativization Barrier to P vs. NP

Last edited: August 8, 2025

Key Takeaway: diagonalization, a powerful proof technique, is doomed to fail at resolving P vs. NP.

Context

Complexity theory is, in large part, the study of impossibility results, and the relativization barrier is one of them. Here are some classic impossibility results:

  1. The reals are uncountable [Cantor 1874]
  2. Gödel's incompleteness theorems [Gödel 1931]
  3. The halting problem is undecidable [Turing 1936]
  4. The time hierarchy theorem [Hartmanis-Stearns 1965] — \(\text{P}\) is a strict subset of \(\text{EXP}\)

Notice that all of these theorems use a single technique: diagonalization.
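A minimal illustration (my own sketch, not from the notes) of the diagonal trick these results share, in the setting of Cantor's argument: given any enumeration of binary sequences, flip the diagonal to build a sequence that disagrees with the \(i\)-th entry at position \(i\), so it cannot appear anywhere in the enumeration.

```python
def diagonalize(table):
    """table[i][j] is bit j of the i-th enumerated binary sequence
    (truncated to an n x n finite prefix for illustration).
    Returns a sequence differing from row i at position i."""
    return [1 - table[i][i] for i in range(len(table))]

enumeration = [
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 1],
    [1, 0, 0, 1],
]
diag = diagonalize(enumeration)  # flips the diagonal bits
# diag disagrees with row i at index i, so it is not any row of the table.
assert all(diag[i] != row[i] for i, row in enumerate(enumeration))
```

The other three results use the same skeleton: enumerate all candidate objects (proofs, Turing machines, fast machines) and construct one that differs from the \(i\)-th candidate on the \(i\)-th input.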

(Realistic) LLMs are not super powerful

Last edited: August 8, 2025

Introduction

Large language models (LLMs) have transformed much of natural language processing (NLP), the study of human languages (Radford et al., n.d.; Vaswani et al. 2017; Devlin et al. 2019; Raffel et al. 2023). Their success is, in some sense, a little surprising: humans have essentially built themselves a general "reasoning" algorithm (albeit a fairly bad one, as of right now) using entirely inexact Bayesian modeling approaches. Precisely because LLMs grew out of the Bayesian modeling literature, "formal" questions about what an LLM can or cannot do (the complexity-theoretic properties of LLMs as models) are comparatively understudied. Even now, we have very little idea of what an LLM is actually learning, let alone how to steer it or bound its behavior (Wu et al. 2025), which, for a purported thinking machine, is something we probably want.