Posts

fundamental theorem of arithmetic

Last edited: August 8, 2025

factorization motivator

If \(p\) is prime and \(p | ab\), then \(p|a\) or \(p|b\).

If \(p|a\), we are done.

Consider the case where \(p|ab\) yet \(a\) is not divisible by \(p\). Then \(a\) and \(p\) are coprime, so by Bézout's identity we have:

\begin{equation} \gcd (a,p) = 1 = s a + tp \end{equation}

We note that:

\begin{align} b &= 1 \cdot b \\ &= (sa+tp) b \\ &= sab + tpb \\ &= s(ab) + (tb)p \end{align}

Since \(p|ab\) and \(p|p\), \(p\) divides both terms of the last expression, so \(p|b\).
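To see the algebra in action, here is a minimal Python sketch (the helper ext_gcd is just an extended Euclidean algorithm written for illustration) that computes Bézout coefficients \(s, t\) for sample values and checks each step of the argument:

def ext_gcd(x, y):
    """Extended Euclidean algorithm: returns (g, s, t) with g = gcd(x, y) = s*x + t*y."""
    if y == 0:
        return x, 1, 0
    g, s, t = ext_gcd(y, x % y)
    # gcd(x, y) = gcd(y, x % y); back-substitute to recover coefficients for (x, y).
    return g, t, s - (x // y) * t

# Example: p = 7 is prime and divides ab = 10 * 21, but not a = 10.
p, a, b = 7, 10, 21
g, s, t = ext_gcd(a, p)
assert g == 1 and s * a + t * p == 1     # a, p coprime; Bezout identity
assert b == s * (a * b) + (t * b) * p    # the expansion above
assert (a * b) % p == 0 and b % p == 0   # p | ab, and therefore p | b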

Fundamental Theorem of Calculus

Last edited: August 8, 2025

Lovely, well known result:

\begin{equation} \dv{x} \int_{a}^{x} f(t)\dd{t} = f(x) \end{equation}

for any fixed \(a\). Changing \(a\) only shifts the integral by a constant, so \(a\) effectively plays the role of the \(+C\) term.
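A quick numerical sanity check (a minimal sketch; the choice of \(f\), the grid, and the trapezoid quadrature are arbitrary):

import numpy as np

f = np.cos                    # any continuous f works
a = 0.3                       # fixed lower limit
x = np.linspace(a, 2.0, 2001)

# F(x) = integral from a to x of f(t) dt, accumulated with the trapezoid rule
F = np.concatenate(([0.0], np.cumsum((f(x[1:]) + f(x[:-1])) / 2 * np.diff(x))))

# dF/dx should recover f(x), up to discretization error
dF = np.gradient(F, x)
print(np.max(np.abs(dF[1:-1] - f(x[1:-1]))))  # small; shrinks as the grid is refined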

fundamental theorem of linear maps

Last edited: August 8, 2025

The dimension of the null space plus the dimension of the range of a Linear Map equals the dimension of its domain.

This also implies that both the null space and the range are finite-dimensional (for the null space this is trivial, because it is a subspace of the already finite-dimensional domain).

constituents

requirements

\begin{equation} \dim V = \dim \operatorname{null} T + \dim \operatorname{range} T \end{equation}
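A quick numerical illustration with NumPy (a minimal sketch; the 4×7 shape and the rank-3 construction are arbitrary choices):

import numpy as np

rng = np.random.default_rng(0)
# A linear map T : R^7 -> R^4, built as a product so its rank is 3
T = rng.standard_normal((4, 3)) @ rng.standard_normal((3, 7))

U, s, Vt = np.linalg.svd(T)
tol = 1e-10 * s.max()
dim_range = int(np.sum(s > tol))           # number of nonzero singular values = dim range T
null_basis = Vt[dim_range:]                # remaining right singular vectors span null T
dim_null = null_basis.shape[0]

assert np.allclose(T @ null_basis.T, 0)    # they really are null vectors
assert dim_null + dim_range == T.shape[1]  # rank-nullity: 4 + 3 == 7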

fusion (machine learning)

Last edited: August 8, 2025

fusion in machine learning is the process of combining features from multiple modalities into a single representation, either before or after encoding.

late fusion

late fusion combines features in a multi-modal model by first embedding each modality's features separately and then merging the resulting embeddings

early fusion

early fusion combines features in a multi-modal model by concatenating the raw features first and then embedding the concatenated input
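A minimal PyTorch sketch of the two variants (the module names, dimensions, plain linear encoders, and summation as the late-fusion combine are all illustrative assumptions, not a prescribed architecture):

import torch
import torch.nn as nn

class EarlyFusion(nn.Module):
    """Concatenate the raw modality features first, then embed the joint vector."""
    def __init__(self, dim_a, dim_b, dim_embed):
        super().__init__()
        self.embed = nn.Linear(dim_a + dim_b, dim_embed)

    def forward(self, feat_a, feat_b):
        return self.embed(torch.cat([feat_a, feat_b], dim=-1))

class LateFusion(nn.Module):
    """Embed each modality separately, then merge the embeddings."""
    def __init__(self, dim_a, dim_b, dim_embed):
        super().__init__()
        self.embed_a = nn.Linear(dim_a, dim_embed)
        self.embed_b = nn.Linear(dim_b, dim_embed)

    def forward(self, feat_a, feat_b):
        # summation is one simple merge; concatenating embeddings or attention are also common
        return self.embed_a(feat_a) + self.embed_b(feat_b)

# e.g. image features (dim 512) and text features (dim 300), batch of 8
img, txt = torch.randn(8, 512), torch.randn(8, 300)
print(EarlyFusion(512, 300, 128)(img, txt).shape)  # torch.Size([8, 128])
print(LateFusion(512, 300, 128)(img, txt).shape)   # torch.Size([8, 128])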

FV-POMCPs

Last edited: August 8, 2025

Main problem: the spaces of joint actions and joint observations grow exponentially with the number of agents.

Solution: sample-based online planning for multiagent systems. We do this with the factored-value POMCP.

  • factored statistics: reduces the number of joint actions to consider (by factoring the action-selection statistics)
  • factored trees: reduces the number of histories to track

Multiagent Definition

  • \(I\) set of agents
  • \(S\) set of states
  • \(A_{i}\) set of actions for each agent \(i\)
  • \(T\) state transitions
  • \(R\) reward function
  • \(Z_{i}\) set of observations for each agent \(i\)
  • \(O\) observation function
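Collected as a single container, the tuple might look like this (a minimal sketch; the field names and types are illustrative):

from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class MPOMDP:
    """Multiagent POMDP tuple from the definition above."""
    agents: Sequence[int]               # I
    states: Sequence                    # S
    actions: dict[int, Sequence]        # A_i, per agent i
    transition: Callable                # T(s, joint_a) -> distribution over next states
    reward: Callable                    # R(s, joint_a) -> float
    observations: dict[int, Sequence]   # Z_i, per agent i
    observation_fn: Callable            # O(next_s, joint_a) -> distribution over joint observations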

Coordination Graphs

You can use sum-product elimination to simplify the Bayesian Network of the agents' Coordination Graphs (which describe how agents influence each other).
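As a sketch of how elimination avoids enumerating joint actions, here is a small Python example on an assumed chain-structured coordination graph with three agents and random local payoffs; it runs the max variant of the elimination (max-plus), which is the form used when selecting a joint action:

import numpy as np

# Factored joint value on a chain coordination graph over 3 agents:
#   Q(a1, a2, a3) = Q12(a1, a2) + Q23(a2, a3)
# Each agent has 4 actions; the local payoff tables are random placeholders.
rng = np.random.default_rng(0)
n = 4
Q12 = rng.standard_normal((n, n))
Q23 = rng.standard_normal((n, n))

# Eliminate agent 3: best response of agent 3 to each a2, and its value.
g3 = Q23.max(axis=1)                         # g3[a2] = max_{a3} Q23(a2, a3)
best3 = Q23.argmax(axis=1)

# Eliminate agent 2: fold g3 into Q12, then maximize over a2.
g2 = (Q12 + g3[None, :]).max(axis=1)         # g2[a1] = max_{a2} [Q12(a1, a2) + g3(a2)]
best2 = (Q12 + g3[None, :]).argmax(axis=1)

# Choose agent 1's action, then back-substitute the recorded best responses.
a1 = int(g2.argmax())
a2 = int(best2[a1])
a3 = int(best3[a2])

# Same answer as brute force over all 4^3 joint actions, found without enumerating them.
brute_value, brute_action = max(
    (Q12[i, j] + Q23[j, k], (i, j, k))
    for i in range(n) for j in range(n) for k in range(n)
)
assert brute_action == (a1, a2, a3)
print("best joint action:", (a1, a2, a3), "value:", Q12[a1, a2] + Q23[a2, a3])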