
Computational Task

Last edited: August 8, 2025

A Computational Task:

Decision problems

  1. a decision problem: \(\Sigma^{*} \to \{\text{no}, \text{yes}\}\)
  2. we often identify the “yes” instances of a decision problem with a language \(L \subseteq \Sigma^{*}\)

“Given a boolean formula \(\varphi\), accept iff \(\varphi\) is satisfiable (SAT)”
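A minimal sketch of this decision problem, treating a formula as a Python predicate over \(n\) boolean variables (the function and variable names here are illustrative, not from the notes):

```python
from itertools import product

def is_satisfiable(phi, n):
    """Decide SAT by brute force: accept iff some assignment of
    n boolean variables makes the predicate phi evaluate to True."""
    return any(phi(*bits) for bits in product([False, True], repeat=n))

# (x or y) and (not x or not y): XOR, satisfiable
print(is_satisfiable(lambda x, y: (x or y) and (not x or not y), 2))  # True
# x and not x: a contradiction, unsatisfiable
print(is_satisfiable(lambda x: x and not x, 1))  # False
```

The output is a single yes/no bit, which is exactly what makes this a decision problem rather than a function problem.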

Function problems

Compute, for a particular input \(w\), the value of:

\begin{equation} f(w) : \Sigma^{*} \to \Sigma^{*} \end{equation}

Note that there is a unique answer.

“Given a formula \(\varphi\), output the lexicographically first satisfying assignment, or the number of satisfying assignments”
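Both of those function-problem variants of SAT can be sketched with the same brute-force loop as before (again with illustrative names, assuming a formula given as a Python predicate):

```python
from itertools import product

def lex_first_and_count(phi, n):
    """Return the lexicographically first satisfying assignment
    (or None if unsatisfiable) and the total number of satisfying
    assignments, by enumerating all 2^n assignments in order."""
    first, count = None, 0
    for bits in product([False, True], repeat=n):  # lexicographic: False < True
        if phi(*bits):
            if first is None:
                first = bits
            count += 1
    return first, count

print(lex_first_and_count(lambda x, y: x or y, 2))  # ((False, True), 3)
```

Unlike the decision version, the output is a full string (an assignment, or a number), and there is exactly one correct answer for each input.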

computer number system


bit

A computer is built out of transistors acting as binary switches: applying voltage at \(B\) allows current to pass between \(S\) and \(D\), so each switch is either on or off.

byte

A group of \(8\) bits.

Computer memory is a large array of bytes. It is only BYTE ADDRESSABLE: you can’t address a bit in isolation.
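A small illustration of byte addressability, sketched in Python: memory is modeled as a `bytearray` indexed by byte, and reading a single bit requires shifting and masking within a byte (the helper name is an assumption for this example):

```python
memory = bytearray([0b10110100, 0xFF, 0x00])  # a tiny "memory" of 3 addressable bytes

# Whole bytes can be addressed directly...
print(memory[0])  # 180

# ...but an individual bit must be extracted by shifting and masking.
def get_bit(mem, byte_addr, bit_index):
    """Read bit `bit_index` (0 = least significant) of the byte at `byte_addr`."""
    return (mem[byte_addr] >> bit_index) & 1

print(get_bit(memory, 0, 2))  # 1
```

The indexing unit is the byte; the bit position only exists as an offset computed in software, mirroring how byte-addressable hardware works.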

bases

In general, each base uses digits \(0\) to \(\text{base}-1\).

We prefix numbers with 0x to represent hexadecimal and 0b to represent binary.
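Python happens to use the same 0x / 0b prefixes, so the conventions above can be checked directly:

```python
# The same value written in binary, hexadecimal, and decimal.
n = 0b101010
print(n, hex(n), bin(n))  # 42 0x2a 0b101010

# int() parses a string in a given base; the 0x prefix is optional.
print(int("0x2a", 16), int("2a", 16))  # 42 42

# Base b uses digits 0 .. b-1, so 777 is the largest 3-digit octal number.
print(int("777", 8))  # 511
```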

conceptual grammar


conceptual grammar is a proposed universal grammar that connects semantic primes; in theory, it holds across all languages.

There are three main categories of conceptual grammars:

  • Combinatorics (connecting one idea to another)
  • Account of valencies? #what
  • Propositional complementation (location: “something that happens in this place”)

ConDef Abstract


Current automated lexicography (term definition) techniques cannot include contextual or new-term information as part of their synthesis. We propose a novel data-harvesting scheme leveraging lead paragraphs in Wikipedia to train automated context-aware lexicographical models. Furthermore, we present ConDef, a fine-tuned BART trained on the harvested data that defines vocabulary terms from a short context. ConDef is determined to be highly accurate in context-dependent lexicography, as validated on ROUGE-1 and ROUGE-L measures on a 1000-item withheld test set, achieving scores of 46.40% and 43.26% respectively. Furthermore, we demonstrate that ConDef’s syntheses serve as good proxies for term definitions, achieving a ROUGE-1 measure of 27.79% directly against gold-standard WordNet definitions.

Accepted to the 2022 SAI Computing Conference, to be published in Springer Nature’s Lecture Notes on Networks and Systems.