
Gauss' Law

Last edited: August 8, 2025

Gauss' Law is a principle relating the electric flux through a surface to the charge inside it: the electric flux through a closed surface equals the sum of the enclosed electric charge divided by the permittivity of free space.

That is:

\begin{equation} \oint E \cdot dA = \frac{\sum Q}{\epsilon_{0}} \end{equation}

somewhat motivating Gauss’ Law

Consider a sphere with uniformly distributed charge on its surface. It has surface area \(4 \pi r^{2}\). Given the expression of electric flux, the fact that the source charge is at the center, and the fact that the field is evenly distributed over the surface (i.e. \(E\) is held constant over it):
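Pulling the constant \(E\) out of the integral and using Coulomb's law for the field of a point charge, \(E = \frac{Q}{4\pi \epsilon_{0} r^{2}}\) (a standard fact not stated in this note), the flux works out to:

\begin{equation} \oint E \cdot dA = E \qty(4\pi r^{2}) = \frac{Q}{4\pi \epsilon_{0} r^{2}} \cdot 4\pi r^{2} = \frac{Q}{\epsilon_{0}} \end{equation}

which is exactly Gauss' Law for this special case.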

Gaussian

Last edited: August 8, 2025

The Gaussian, in general, has the form:

\begin{equation} e^{-\frac{ax^{2}}{2}} \end{equation}

which is a bell-shaped curve. It's pretty darn important.
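A standard companion fact (not derived in this note): the curve has a finite integral over the whole line, which is what lets it be rescaled into a probability density:

\begin{equation} \int_{\mathbb{R}} e^{-\frac{ax^{2}}{2}} \dd{x} = \sqrt{\frac{2\pi}{a}} \end{equation}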

solving heat equation without boundary

For the general expression:

\begin{equation} \pdv{U}{t} = \alpha \pdv[2]{U}{x} \end{equation}

\begin{equation} U(t,x) = \frac{1}{\sqrt{4\pi \alpha t}}\int_{\mathbb{R}} f(y) e^{-\frac{(x-y)^{2}}{4\alpha t}} \dd{y} \end{equation}

where, taking the Fourier transform in \(x\), the transformed solution satisfies:

\begin{equation} \hat{U}(t,\lambda) = \hat{f}(\lambda)e^{-\alpha t \lambda^{2}} \end{equation}
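A minimal numerical sketch of the convolution formula above; the value of \(\alpha\) and the initial condition \(f\) are made-up example choices, not from this note:

```python
import numpy as np

alpha = 1.0

def f(y):
    # hypothetical initial condition: a unit box of heat on [-1, 1]
    return np.where(np.abs(y) < 1.0, 1.0, 0.0)

def U(t, x, n=4001, width=50.0):
    # approximate the integral over R by a Riemann sum on a wide truncated grid
    y = np.linspace(-width, width, n)
    kernel = np.exp(-(x - y) ** 2 / (4 * alpha * t)) / np.sqrt(4 * np.pi * alpha * t)
    dy = y[1] - y[0]
    return np.sum(f(y) * kernel) * dy

print(U(0.01, 0.0))  # ~1.0: at small t, heat has barely spread
print(U(10.0, 0.0))  # much smaller: heat has diffused outward
```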

Heat Equation and Gaussian

\begin{equation} H(t,x) = \frac{1}{\sqrt{2\pi t}}e^{-\frac{x^{2}}{2t}} \end{equation}

You will note that \(H\) does satisfy the heat equation, with \(\alpha = \frac{1}{2}\):
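A quick symbolic check of this (a sketch using sympy; \(\alpha = \frac{1}{2}\) is the value that matches the kernel above):

```python
import sympy as sp

t, x = sp.symbols("t x", positive=True)
H = sp.exp(-x**2 / (2 * t)) / sp.sqrt(2 * sp.pi * t)

# H_t should equal (1/2) H_xx if H solves the heat equation with alpha = 1/2
lhs = sp.diff(H, t)
rhs = sp.Rational(1, 2) * sp.diff(H, x, 2)
print(sp.simplify(lhs - rhs))  # prints 0
```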

Gaussian distribution

Last edited: August 8, 2025

\begin{equation} \mathcal{N}(x|\mu, \Sigma) = \qty(2\pi)^{-\frac{n}{2}} |\Sigma|^{-\frac{1}{2}} \exp \qty(-\frac{1}{2} \qty(x-\mu)^{\top} \Sigma^{-1}(x-\mu)) \end{equation}

where \(\Sigma\) is positive definite (it must be invertible for \(\Sigma^{-1}\), and hence the density, to exist)
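A direct transcription of this density into code, as a sketch (the \(\mu\) and \(\Sigma\) values are made-up examples):

```python
import numpy as np

def gaussian_pdf(x, mu, Sigma):
    # (2*pi)^(-n/2) * |Sigma|^(-1/2) * exp(-1/2 (x-mu)^T Sigma^{-1} (x-mu))
    n = len(mu)
    diff = x - mu
    quad = diff @ np.linalg.solve(Sigma, diff)  # avoids forming Sigma^{-1} explicitly
    norm = (2 * np.pi) ** (-n / 2) * np.linalg.det(Sigma) ** (-0.5)
    return norm * np.exp(-0.5 * quad)

mu = np.array([0.0, 0.0])
Sigma = np.array([[1.0, 0.5], [0.5, 2.0]])  # positive definite
print(gaussian_pdf(np.array([0.0, 0.0]), mu, Sigma))  # density at the mean
```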

conditioning Gaussian distributions

For two jointly Gaussian random variables \(a, b\), we obtain:

\begin{align} \mqty[a \\ b] \sim \mathcal{N} \qty(\mqty[\mu_{a}\\ \mu_{b}], \mqty(A & C \\ C^{\top} & B)) \end{align}

meaning, each one can be marginalized as:

\begin{align} a \sim \mathcal{N}(\mu_{a}, A) \\ b \sim \mathcal{N}(\mu_{b}, B) \\ \end{align}

Conditioning works with those terms too; for \(a|b\):
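The standard result (not written out above, but a known fact about jointly Gaussian variables) is:

\begin{align} a | b \sim \mathcal{N} \qty(\mu_{a} + C B^{-1} \qty(b - \mu_{b}),\ A - C B^{-1} C^{\top}) \end{align}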

Gaussian elimination

Last edited: August 8, 2025

The point of Gaussian elimination is to solve a linear system by reducing its matrix to the identity. Say you have a matrix expression:

\begin{equation} Ax = b \end{equation}

We can apply \(A^{-1}\) to both sides; we then have:

\begin{equation} A^{-1}Ax = A^{-1} b \end{equation}

Applying the definition of the identity:

\begin{equation} Ix = A^{-1}b \end{equation}

Therefore, solving the system comes down to applying the effect of \(A^{-1}\) via row operations, which yields \(x = A^{-1}b\).
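A minimal sketch of the elimination itself, reducing the augmented matrix \([A | b]\) until the left block is the identity (the example \(A\) and \(b\) are made up; there is no pivoting, so it assumes nonzero pivots):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])

M = np.hstack([A, b[:, None]])  # augmented matrix [A | b]
n = len(b)
for i in range(n):
    M[i] = M[i] / M[i, i]                 # scale so the pivot becomes 1
    for j in range(n):
        if j != i:
            M[j] = M[j] - M[j, i] * M[i]  # clear column i in the other rows

x = M[:, -1]  # left block is now I, so the last column is A^{-1} b
print(x)                      # [0.8 1.4]
print(np.linalg.solve(A, b))  # same answer
```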

Gaussian mixture model

Last edited: August 8, 2025

Gaussian models are typically unimodal, meaning they have one peak (the density increases up to that peak and decreases after it).

Therefore, in order to model something more complex with multiple peaks, we just take a weighted average of multiple Gaussian models:

\begin{equation} p(x | \dots ) = \sum_{i=1}^{n}p_{i} \mathcal{N}(x | \mu_{i}, {\sigma_{i}}^{2}) \end{equation}

where we want our weights \(p_{i}\) to sum to \(1\), because we want the mixture to still integrate to \(1\).
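A sketch of the mixture density above, with made-up weights and components:

```python
import numpy as np

p = np.array([0.3, 0.7])       # weights, summing to 1
mu = np.array([-2.0, 1.0])     # component means
sigma = np.array([0.5, 1.0])   # component standard deviations

def mixture_pdf(x):
    # evaluate each univariate Gaussian component, then weight and sum
    comps = np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
    return float(np.sum(p * comps))

print(mixture_pdf(-2.0))  # high: near the first peak
print(mixture_pdf(1.0))   # high: near the second peak
print(mixture_pdf(-0.5))  # low: in the valley between the peaks
```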