
xavier initialization

Last edited: October 10, 2025

A neural network initialization scheme that tries to avoid Vanishing Gradients.

Consider the \(Wx\) step in a neural network:

\begin{equation} o_{i} = \sum_{j=1}^{n_{\text{in}}} w_{ij} x_{j} \end{equation}

Assume the weights \(w_{ij}\) are i.i.d. with mean zero and variance \(\sigma^{2}\), and the inputs \(x_{j}\) are i.i.d. with mean zero and variance \(v^{2}\), independent of the weights. The variance of each output is then:

\begin{equation} \text{Var}\qty [o_{i}] = n_{\text{in}} \sigma^{2} v^{2} \end{equation}
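To keep this output variance close to the input variance \(v^{2}\) (and symmetrically for the backward pass), Xavier/Glorot initialization sets \(\sigma^{2} = \frac{2}{n_{\text{in}} + n_{\text{out}}}\). A minimal NumPy sketch (the helper name and the Gaussian sampling choice are mine, not from these notes):

```python
import numpy as np

def xavier_init(n_in, n_out, rng=None):
    """Sample an (n_out, n_in) weight matrix with
    Var[w_ij] = 2 / (n_in + n_out), the Xavier/Glorot choice that
    balances the forward variance n_in * sigma^2 * v^2 against the
    analogous backward-pass term with n_out."""
    if rng is None:
        rng = np.random.default_rng()
    sigma = np.sqrt(2.0 / (n_in + n_out))
    return rng.normal(0.0, sigma, size=(n_out, n_in))
```

With `n_in = n_out`, this reduces to \(\sigma^{2} = 1/n_{\text{in}}\), so unit-variance inputs produce roughly unit-variance outputs.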

expectation maximization

Last edited: October 10, 2025

Sorta like “distribution-based k-means clustering”. Jensen’s Inequality guarantees that each update never decreases the log-likelihood, so the algorithm converges (to a local maximum of the likelihood, not necessarily the global one).

constituents

A Gaussian mixture model!

requirements

Two steps:

e-step

“guess the value of \(z^{(i)}\); soft guesses of cluster assignments”

\begin{align} w_{j}^{(i)} &= p\qty(z^{(i)} = j | x^{(i)} ; \phi, \mu, \Sigma) \\ &= \frac{p\qty(x^{(i)} | z^{(i)}=j) p\qty(z^{(i)}=j)}{\sum_{l=1}^{k}p\qty(x^{(i)} | z^{(i)}=l) p\qty(z^{(i)}=l)} \end{align}

Where we have:

  • \(p\qty(x^{(i)} |z^{(i)}=j)\) from the Gaussian distribution, where we have \(\Sigma_{j}\) and \(\mu_{j}\) for the parameters of our Gaussian \(j\).
  • \(p\qty(z^{(i)} =j)\) is just \(\phi_{j}\) which we are learning

These weights \(w_{j}^{(i)}\) are how much the model believes \(x^{(i)}\) belongs to each cluster \(j\).
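The e-step above can be sketched in NumPy (a hypothetical helper, not the CS229 starter code; the Gaussian density is computed by hand to keep it self-contained):

```python
import numpy as np

def gaussian_pdf(X, mean, cov):
    """Density of N(mean, cov) evaluated at each row of X."""
    d = X.shape[1]
    diff = X - mean
    inv = np.linalg.inv(cov)
    # Mahalanobis quadratic form for each row of X
    quad = np.einsum('id,de,ie->i', diff, inv, diff)
    norm = np.sqrt((2 * np.pi) ** d * np.linalg.det(cov))
    return np.exp(-0.5 * quad) / norm

def e_step(X, phi, mu, Sigma):
    """Responsibilities w[i, j] = p(z_i = j | x_i; phi, mu, Sigma).
    X: (m, d) data, phi: (k,) mixing weights, mu: (k, d) means,
    Sigma: (k, d, d) covariances."""
    m, k = X.shape[0], len(phi)
    w = np.empty((m, k))
    for j in range(k):
        # numerator: p(x | z=j; mu_j, Sigma_j) * phi_j
        w[:, j] = phi[j] * gaussian_pdf(X, mu[j], Sigma[j])
    w /= w.sum(axis=1, keepdims=True)  # denominator: sum over l = 1..k
    return w
```

Each row of the returned matrix sums to 1, matching the normalization in the equation above.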

Kosaraju's Algorithm

Last edited: October 10, 2025

An algorithm for finding the strongly connected components of a directed graph.

  1. Do DFS starting at an arbitrary node (restarting at unvisited nodes) to create a DFS forest (i.e. disconnected DFS trees), keeping track of finishing times
  2. Reverse all edges in the graph
  3. Do DFS again to create another DFS forest, starting each new tree from the unvisited vertex with the largest finishing time from step 1
  4. The strongly connected components are exactly the trees of the second DFS forest

main idea

Recall that finishing times track a topological order. After reversing the edges, the condensation-graph node containing the vertex with the largest finishing time has no outgoing edges (i.e. it was originally first in the topological order); if we run DFS there, we will find exactly that component and nothing else. Having visited it, we have effectively “removed” that component, and we repeat with the next-largest finishing time.
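The four steps above can be sketched in Python (a hypothetical implementation; the recursive DFS assumes a small graph, since deep graphs can hit Python's recursion limit):

```python
from collections import defaultdict

def kosaraju_scc(vertices, edges):
    """Kosaraju's algorithm: strongly connected components via two DFS
    passes. `vertices` is an iterable of hashable nodes, `edges` a list
    of directed (u, v) pairs."""
    graph = defaultdict(list)
    rev = defaultdict(list)
    for u, v in edges:
        graph[u].append(v)
        rev[v].append(u)  # step 2: the reversed graph

    # Step 1: DFS on the original graph, recording finish order.
    visited, order = set(), []
    def dfs1(u):
        visited.add(u)
        for v in graph[u]:
            if v not in visited:
                dfs1(v)
        order.append(u)  # u finishes after all its descendants

    for u in vertices:
        if u not in visited:
            dfs1(u)

    # Step 3: DFS on the reversed graph in decreasing finish time.
    visited.clear()
    sccs = []
    def dfs2(u, comp):
        visited.add(u)
        comp.append(u)
        for v in rev[u]:
            if v not in visited:
                dfs2(v, comp)

    for u in reversed(order):
        if u not in visited:
            comp = []
            dfs2(u, comp)
            sccs.append(comp)  # step 4: each tree is one SCC
    return sccs
```

For example, on the cycle `1 → 2 → 3 → 1` with an extra edge `3 → 4`, this returns the components `{1, 2, 3}` and `{4}`.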