monad
Last edited: August 8, 2025
We can abstract the common parts of language features such as state and exceptions. This allows programming these features in pure lambda calculus.
A monad \(M a\) is an abstract type: the “normal” type is \(a\), and the semantics are hidden in \(M\).
- return: \(a \to Ma\)
- bind: \(M a \to (a \to M b) \to M b\): it takes a monadic value and a function from the unwrapped value to a new monadic value, and returns that new monadic value
bind is written \(v \gg= f\) for a monadic value \(v\) and a function \(f: a \to M b\)
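The two operations above can be sketched with a Maybe-style monad in Python. This is a minimal illustration, not any particular library's API; `Maybe`, `ret`, `bind`, and `safe_div` are names invented here for the sketch.

```python
from typing import Callable, Generic, Optional, TypeVar

A = TypeVar("A")
B = TypeVar("B")

class Maybe(Generic[A]):
    """M a: wraps either a value or "nothing" (the hidden semantics)."""
    def __init__(self, value: Optional[A], present: bool):
        self.value = value
        self.present = present

def ret(x: A) -> Maybe[A]:
    """return: a -> M a; wrap a plain value."""
    return Maybe(x, True)

def bind(m: Maybe[A], f: Callable[[A], Maybe[B]]) -> Maybe[B]:
    """bind: M a -> (a -> M b) -> M b."""
    if not m.present:
        return Maybe(None, False)   # propagate failure without calling f
    return f(m.value)

# usage: chain a failure-prone computation with bind
def safe_div(x: float, y: float) -> Maybe[float]:
    return ret(x / y) if y != 0 else Maybe(None, False)

r = bind(ret(10.0), lambda x: safe_div(x, 2.0))   # wraps 5.0
```

The point of `bind` is that the failure-handling logic lives in one place (inside `bind`) rather than being repeated at every call site.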
Monetarist theory
Monetarist theory is a theory of economics proposed by Milton Friedman which asserts that Keynesian economics applies only in the limited sense that the central bank needs to keep the money supply growing; otherwise, the free market can handle itself.
Therefore the Monetarist theorists propose that the stock market crash of 1929 was caused by the Federal Reserve doing a bad job of actually controlling the money supply, and not injecting enough money into the economy.
monitor pattern
The monitor pattern is a multithreading pattern that helps prevent race conditions and deadlocks.
Associate a single lock with a collection of variables (a “class”), so the whole group shares one lock.
Any time you want to access anything in that group, you first lock the mutex associated with the group. Meaning: there’s only one mutex which can be used to change the shared state.
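A minimal sketch of the pattern with Python's `threading.Lock`; `BankAccount` is a hypothetical example class, not from the original notes.

```python
import threading

class BankAccount:
    """Monitor pattern: one lock guards the whole group of shared variables."""
    def __init__(self):
        self._lock = threading.Lock()   # the single mutex for this group
        self._balance = 0               # shared state, only touched under the lock

    def deposit(self, amount: int) -> None:
        with self._lock:                # acquire before touching shared state
            self._balance += amount

    def balance(self) -> int:
        with self._lock:
            return self._balance

# usage: many threads deposit concurrently without a race
account = BankAccount()
threads = [threading.Thread(target=account.deposit, args=(1,)) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because every method acquires the same `_lock`, at most one thread at a time can read or write the group, which is exactly the monitor invariant.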
Bridge Crossing
There are cars crossing a one-lane bridge: each car runs in a thread, and the threads have to coordinate when/where to cross the bridge.
monte-carlo tree search
- \(\mathcal{P}\) problem (states, transitions, etc.)
- \(N\) visit counts
- \(Q\) a q-table: action-value estimates
- \(d\) depth (how many next states to look into)—more is more accurate but slower
- \(U\) value function estimate; usually a Rollout Policy, estimate at some depth \(d\)
- \(c\) exploration constant
After \(n\) simulations from the starting state, we find the best action for our current state from our Q-table.
Subroutine: simulate(state, depth_remaining)
- If depth_remaining = 0, simply return the utility from the value function estimate
- For the state and actions we just got: if we haven’t seen the state, just return the value function estimate and initialize the N and Q tables
- Select an action via the monte-carlo exploration formula
- Sample a next state and current reward for the chosen action via a generative model
- value = reward + discount*simulate(next_state, depth_remaining-1)
- Increment the N(state, action) count
- Update the Q-table at (state, action): Q[s,a] += (value - Q[s,a]) / N[s,a] (“how much better is taking this action?”, with later samples of the action weighted more lightly)
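The steps above can be sketched as follows. This is a toy instance, not a full MCTS implementation: `ACTIONS`, `U`, and `generative_model` are placeholder assumptions standing in for the real problem \(\mathcal{P}\), value estimate \(U\), and simulator.

```python
import math
import random
from collections import defaultdict

ACTIONS = [0, 1]               # placeholder action set (assumption)
GAMMA, C = 0.9, 1.4            # discount and exploration constant c

N = defaultdict(int)           # visit counts N[s, a]
Q = defaultdict(float)         # action-value estimates Q[s, a]
seen = set()                   # states whose N/Q entries are initialized

def U(s):
    """Value-function estimate at the depth cutoff (placeholder)."""
    return 0.0

def generative_model(s, a):
    """Placeholder simulator: returns a sampled (next_state, reward)."""
    return (s + a, random.random())

def explore(s):
    """Monte-carlo exploration: argmax_a Q(s,a) + c*sqrt(log(sum_a N)/N(s,a))."""
    total = sum(N[s, a] for a in ACTIONS)
    def ucb(a):
        if N[s, a] == 0:
            return float("inf")            # try unvisited actions first
        return Q[s, a] + C * math.sqrt(math.log(total) / N[s, a])
    return max(ACTIONS, key=ucb)

def simulate(s, depth_remaining):
    if depth_remaining == 0:
        return U(s)                        # cutoff: value-function estimate
    if s not in seen:                      # unseen state: init tables, estimate
        seen.add(s)
        for a in ACTIONS:
            N[s, a], Q[s, a] = 0, 0.0
        return U(s)
    a = explore(s)
    s2, r = generative_model(s, a)
    value = r + GAMMA * simulate(s2, depth_remaining - 1)
    N[s, a] += 1
    Q[s, a] += (value - Q[s, a]) / N[s, a]
    return value
```

After running `simulate(start, d)` \(n\) times, the best action is `max(ACTIONS, key=lambda a: Q[start, a])`, matching the Q-table lookup described above.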
monte-carlo exploration
\begin{equation} \max_{a} Q(s,a) + c \sqrt{ \frac{\log \sum_{a}N(s,a)}{N(s,a)}} \end{equation}
Montgomery Bus Boycott
The fallout of the Rosa Parks incident, in which many Montgomery residents boycotted the city’s buses.
The boycott was led by Martin Luther King Jr.