Posts

LetsDrive

Last edited: August 8, 2025

Autonomous driving is really hard. How do we integrate planning + learning in a closed-loop style? We'll start from the current belief and construct a tree of all reachable belief states.

Recall DESPOT.

Approach

  1. learning (b): a neural network that maps the driving history to a policy and a value
  2. planning (a): use the neural network's policy and value to guide MCTS
  3. execution (e): execute the chosen actions in a simulator

The data obtained from the simulator is then used to retrain the neural network, closing the loop.
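A minimal sketch of the closed loop (every class and function name here, `PolicyValueNet`, `mcts_plan`, `Simulator`, is a hypothetical toy stand-in, not the actual LetsDrive code):

```python
import random

ACTIONS = ["accelerate", "maintain", "decelerate"]

class PolicyValueNet:
    """Toy stand-in for the neural network: history -> (policy, value)."""
    def __call__(self, history):
        policy = {a: 1.0 / len(ACTIONS) for a in ACTIONS}  # uniform prior
        return policy, 0.0

def mcts_plan(history, net, num_simulations=32):
    """Toy MCTS-style planner: sample actions from the network's prior,
    score them with stand-in rollout returns, return the best action
    and the visit distribution."""
    policy, _ = net(history)
    visits = {a: 0 for a in ACTIONS}
    returns = {a: 0.0 for a in ACTIONS}
    for _ in range(num_simulations):
        a = random.choices(ACTIONS, weights=[policy[x] for x in ACTIONS])[0]
        visits[a] += 1
        returns[a] += random.random()  # stand-in for a simulated rollout return
    best = max(ACTIONS, key=lambda a: returns[a] / max(visits[a], 1))
    total = sum(visits.values())
    return best, {a: visits[a] / total for a in ACTIONS}

class Simulator:
    """Toy simulator: the 'history' is just the list of executed actions."""
    def reset(self):
        return []
    def step(self, history, action):
        reward = 1.0 if action == "maintain" else 0.0
        return history + [action], reward

# The closed loop: learning (b) guides planning (a), which drives execution (e);
# the collected data would be used to retrain the network.
net, sim, buffer = PolicyValueNet(), Simulator(), []
history = sim.reset()
for _ in range(5):
    action, search_policy = mcts_plan(history, net)  # planning guided by the net
    history, reward = sim.step(history, action)      # execution in the simulator
    buffer.append((history, search_policy, reward))  # training data for the net
```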

level set

Last edited: August 8, 2025

Occasionally, you can't get an explicit solution. For example:

\begin{equation} \frac{dy}{dt} = e^{t}\cos y \end{equation}

after separating variables, \(\frac{dy}{\cos y} = e^{t}\,dt\), and integrating both sides (using \(\int \sec y \, dy = \ln (\sec y + \tan y)\)), you get:

\begin{equation} \ln (\sec y + \tan y) - e^{t} = C \end{equation}

you get level sets of the function \(F(t,y) = \ln (\sec y + \tan y) - e^{t}\): each constant C picks out one implicit solution curve, shifting the family up and down.

But at any given \((t,y)\), you get a slope \(e^{t}\cos y\).
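As a quick sanity check, here is a minimal sympy sketch (assuming sympy is installed): differentiating \(F\) along a trajectory and substituting the ODE should give zero, confirming that each level set is a solution curve.

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

# The implicit solution F(t, y) = ln(sec y + tan y) - e^t
F = sp.log(sp.sec(y(t)) + sp.tan(y(t))) - sp.exp(t)

# Differentiate F along a trajectory, then substitute the ODE y' = e^t cos y
dF = sp.diff(F, t).subs(sp.Derivative(y(t), t), sp.exp(t) * sp.cos(y(t)))

print(sp.simplify(dF))  # 0: F is constant along every solution curve
```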

Lexicalization Hypothesis

Last edited: August 8, 2025

The Lexicalization Hypothesis is a hypothesis proposed by Chomsky stating that syntactic transformations can only apply to syntactic constituents; therefore, the rules for putting words together are different from the rules that put phrases together. This theory stands in opposition to generative semantics.

There are two versions of the Lexicalization Hypothesis:

Strong Lexicalization Hypothesis

The Strong Lexicalization Hypothesis states that neither derivational word forms (which change meaning, bench => benching) nor inflectional word forms (which change grammar, eat => eating) can be built by syntactic rules. (Geeraerts 2009)

Lexicon

Last edited: August 8, 2025

Lexicons are pre-labeled datasets that pre-organize words into features. They are useful when training data is sparse.

Instead of using raw word counts, we compute each feature based on the token's assigned label in the lexicon.
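A minimal sketch of lexicon-based featurization (the lexicon here is a hypothetical toy, not a real resource): instead of one count per word, we get one count per lexicon label.

```python
from collections import Counter

# Toy lexicon: each word is pre-assigned a feature label.
LEXICON = {
    "great": "positive",
    "love": "positive",
    "terrible": "negative",
    "hate": "negative",
}

def lexicon_features(tokens):
    """Count tokens per lexicon label instead of per word."""
    return Counter(LEXICON[tok] for tok in tokens if tok in LEXICON)

print(lexicon_features("i love this great but terrible movie".split()))
# Counter({'positive': 2, 'negative': 1})
```

Because the feature space collapses from the whole vocabulary to a handful of labels, this representation stays usable even when training data is sparse.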

Liberal Center

Last edited: August 8, 2025
  • Post-modern search for individualism