
Kosaraju's Algorithm

Last edited: October 10, 2025

Finding strongly connected components.

  1. Do DFS from an arbitrary node to create a DFS forest (i.e. a collection of disconnected DFS trees), keeping track of finishing times
  2. Reverse all edges in the graph
  3. Do DFS again to create another DFS forest, starting each DFS from the unvisited vertex with the largest finishing time from step 1
  4. The strongly connected components are exactly the trees of the second DFS forest

main idea

Recall that finishing times give us a topological sort (of the condensation graph). After reversing edges, the condensation-graph node containing the vertex with the largest finish time has no edges going out (i.e. it was originally first in the topological sort); if we run DFS there, we will find exactly that component. Having visited that component, we have effectively “removed” it, and we repeat with the next-largest finish time.
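The steps above can be sketched as follows (a minimal sketch; vertices are assumed to be numbered \(0, \dots, n-1\) and the graph given as an edge list):

```python
from collections import defaultdict

def kosaraju(n, edges):
    """Return the strongly connected components of a directed graph."""
    graph = defaultdict(list)
    rgraph = defaultdict(list)
    for u, v in edges:
        graph[u].append(v)
        rgraph[v].append(u)  # reversed edge, used in the second pass

    # Pass 1: DFS on the original graph, recording finish order.
    visited = [False] * n
    order = []  # vertices in increasing finishing time

    def dfs1(u):
        visited[u] = True
        for v in graph[u]:
            if not visited[v]:
                dfs1(v)
        order.append(u)

    for u in range(n):
        if not visited[u]:
            dfs1(u)

    # Pass 2: DFS on the reversed graph, in decreasing finishing time.
    # Each tree of this second forest is one strongly connected component.
    visited = [False] * n
    components = []

    def dfs2(u, comp):
        visited[u] = True
        comp.append(u)
        for v in rgraph[u]:
            if not visited[v]:
                dfs2(v, comp)

    for u in reversed(order):
        if not visited[u]:
            comp = []
            dfs2(u, comp)
            components.append(comp)
    return components
```

For example, on the graph with edges \(0 \to 1 \to 2 \to 0\) and \(2 \to 3\), this returns the components \(\qty{0, 1, 2}\) and \(\qty{3}\).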

neural network

Last edited: October 10, 2025

Neural networks are non-linear learning architectures that combine matrix multiplications with entry-wise non-linear operations.

two layers

constituents

Consider a two layer neural network with:

  • \(m\) hidden units
  • \(d\) dimensional input \(x \in \mathbb{R}^{d}\)

requirements

\begin{align} &\forall j \in \qty{1, \dots, m}\\ &z_{j} = \qty(w_{j}^{(1)})^{T} x + b_{j}^{(1)}\\ &a_{j} = \text{ReLU}\qty(z_{j}) \\ &a = \qty(a_1, \dots, a_{m})^{T} \in \mathbb{R}^{m} \\ &h_{\theta} \qty(x) = \qty(w^{(2)})^{T} a + b^{(2)} \end{align}
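The forward pass above can be sketched with NumPy (a minimal sketch; stacking the \(w_{j}^{(1)}\) as rows of a matrix `W1` of shape \(\qty(m, d)\) is an assumption of this code, not part of the definitions):

```python
import numpy as np

def relu(z):
    """Entry-wise non-linearity: ReLU(z) = max(0, z)."""
    return np.maximum(0, z)

def forward(x, W1, b1, w2, b2):
    """Two-layer network: W1 is (m, d), b1 is (m,), w2 is (m,), b2 is a scalar."""
    z = W1 @ x + b1   # z_j = (w_j^(1))^T x + b_j^(1), stacked into a vector
    a = relu(z)       # a_j = ReLU(z_j)
    return w2 @ a + b2  # h_theta(x) = (w^(2))^T a + b^(2)
```

For instance, with \(d = 2\) and \(m = 3\) hidden units, `forward` maps an input vector to a scalar output.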

stochastic gradient descent

Last edited: October 10, 2025

Gradient descent makes a pass over all points to make one gradient step. We can instead approximate the gradient on a minibatch of data; this is the idea behind stochastic gradient descent.

\begin{equation} \theta^{t+1} = \theta^{t} - \eta \nabla_{\theta} L(f_{\theta}(x), y) \end{equation}

where \(\qty(x, y)\) is a randomly sampled example (for a minibatch, we average the gradient over the sampled examples).

We terminate when the change in \(\theta\) becomes small, or when progress halts: for instance, when the loss begins increasing instead of decreasing.

In SGD, we update the weights by taking a single random sample (or minibatch) and moving the weights in the direction of its negative gradient.
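The update loop can be sketched as follows (a minimal sketch; the `grad` callback, which returns the gradient of the loss on a minibatch, is an assumed interface):

```python
import numpy as np

def sgd(grad, theta, data, eta=0.1, epochs=10, batch_size=1, seed=0):
    """Run SGD: grad(theta, batch) gives the loss gradient on a minibatch;
    data is a list of (x, y) examples."""
    rng = np.random.default_rng(seed)
    for _ in range(epochs):
        idx = rng.permutation(len(data))  # fresh random order each epoch
        for start in range(0, len(data), batch_size):
            batch = [data[i] for i in idx[start:start + batch_size]]
            theta = theta - eta * grad(theta, batch)  # one stochastic step
    return theta
```

As a toy usage: minimizing \(\frac{1}{n}\sum_i \qty(\theta - y_i)^2\) with this loop drives \(\theta\) toward the mean of the \(y_i\).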

strongly connected components

Last edited: October 10, 2025

strongly connected components expose clusters of mutually reachable vertices (local “communities”) in a directed graph.

constituents

a directed graph \(G = \qty(V, E)\)

requirements

strongly connected: for all \(v, w \in V\), there is a path from \(v \to w\), and there’s a path from \(w \to v\).

We can decompose a directed graph into its strongly connected components: maximal subgraphs that are strongly connected. (i.e. the vertices form equivalence classes under “is mutually reachable.”)
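The definition can be checked directly with two reachability searches (a minimal sketch; `reachable` is a hypothetical BFS helper over an adjacency-list graph):

```python
from collections import deque

def reachable(graph, s, t):
    """BFS: is there a path from s to t in the adjacency-list graph?"""
    seen, queue = {s}, deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return True
        for v in graph.get(u, []):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return False

def in_same_scc(graph, v, w):
    """v and w are in the same component iff each reaches the other."""
    return reachable(graph, v, w) and reachable(graph, w, v)
```

This pairwise check is quadratic overall; Kosaraju’s Algorithm (below) finds all components at once in linear time.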

additional information

Kosaraju’s Algorithm

A way to find strongly connected components in linear time \(O\qty(n+m)\).