Dijkstra's Algorithm
Last edited: November 11, 2025
constituents
- a weighted directed graph, meaning edges have weights
- where cost of a path is the sum of the weights along that path
- the shortest path is the path with the minimum cost
- starting node \(s\), target node \(t\)
Each node has two states:
- unlabeled
- labeled
And stores a piece of information:
\(d\qty(v) = \text{distance}\qty(s, v)\)
We initialize \(d\qty(v) = \infty\) for all \(v \neq s\), and \(d\qty(s) = 0\).
requirements
- pick the unlabeled node \(u\) with the smallest \(d\qty(u)\)
- for each out-neighbor \(v\) of \(u\):
- set \(d\qty(v) = \min \qty(d\qty(v), d\qty(u) + \text{edgeWeight}\qty(u,v))\)
- mark \(u\) as labeled
import math

def dijkstra(G, s, t):
    # every vertex starts out not-sure, at distance infinity
    sure = {v: False for v in G.V}
    d = {v: math.inf for v in G.V}
    p = {v: None for v in G.V}  # parents, for reconstructing shortest paths
    d[s] = 0
    while not all(sure.values()):
        # pick the not-sure node with the minimum distance estimate
        u = min((v for v in G.V if not sure[v]), key=lambda v: d[v])
        for v in u.out_neighbors:
            if d[u] + edgeWeight(u, v) < d[v]:
                d[v] = d[u] + edgeWeight(u, v)
                p[v] = u  # predecessor on the shortest s-to-v path
        sure[u] = True
    return d[t]
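As a sanity check, here is a minimal sketch of exercising the routine above; the `Vertex` and `Graph` wrappers and the module-level `edgeWeight` are hypothetical stand-ins for whatever graph representation supplies `G.V` and `out_neighbors`:

class Vertex:
    def __init__(self, name):
        self.name = name
        self.out = {}  # neighbor -> edge weight

    @property
    def out_neighbors(self):
        return list(self.out)

class Graph:
    def __init__(self, vertices):
        self.V = vertices

def edgeWeight(u, v):
    return u.out[v]

# s --1--> a --2--> t, plus a direct s --5--> t edge
s, a, t = Vertex("s"), Vertex("a"), Vertex("t")
s.out = {a: 1, t: 5}
a.out = {t: 2}
assert dijkstra(Graph([s, a, t]), s, t) == 3  # path s -> a -> t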
additional information
proof
Let’s start with a lemma: whenever a node \(u\) is marked labeled, we already have \(d\qty(u) = \text{distance}\qty(s, u)\).
Algorithms Index
Last edited: November 11, 2025
Lectures
Divide and Conquer
Sorting
- merge sort: SU-CS161 SEP252025
- recurrence solving: SU-CS161 SEP302025
- median: SU-CS161 OCT022025
- randomized algos + quicksort: SU-CS161 OCT072025
- linear time sorting: SU-CS161 OCT092025
Data Structures
- red-black trees: SU-CS161 OCT142025
- hashing: SU-CS161 OCT212025
Graphs
- DFS/BFS: SU-CS161 OCT232025
- Strongly connected components: SU-CS161 OCT282025
- Dijkstra: SU-CS161 OCT302025
DP
- Bellman-Ford and Floyd-Warshall: SU-CS161 NOV112025
- more DP (LCS, knapsack, independent set): SU-CS161 NOV132025
Greedy Algorithms
Closing
all-pairs
Last edited: November 11, 2025
Bellman-Ford Algorithm
Last edited: November 11, 2025
The Bellman-Ford Algorithm is a dynamic programming algorithm that solves single-source shortest path.
import math

def bellman_ford(V, s, w):
    d = {v: math.inf for v in V}  # d[v]: best distance found so far
    d[s] = 0
    for i in range(len(V) - 1):  # rounds i = 0, ..., n-2
        d_prev = dict(d)  # snapshot of the previous round
        for v in V:
            d[v] = min([d_prev[v]] + [d_prev[u] + w(u, v) for u in v.in_neighbors])
    return d
Notice you only need \(O\qty(n)\) space (two copies of \(d\): the previous round and the current round), and the runtime is \(O\qty(nm)\): the outer loop runs \(n-1\) times, and in each round the minimum over the in-neighbors of every \(v\) touches each edge once, so the inner work adds up to \(\sum_{v} \deg\qty(v) = |E| = m\).
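A minimal sketch of running the code above, assuming a hypothetical `Node` wrapper that records `in_neighbors` and a weight lookup `w`; note the negative edge, which Bellman-Ford handles but Dijkstra would not:

class Node:
    def __init__(self, name):
        self.name = name
        self.in_neighbors = []

def w(u, v):
    return weights[(u, v)]

s, a, b = Node("s"), Node("a"), Node("b")
weights = {(s, a): 4, (s, b): 2, (b, a): -3}
a.in_neighbors = [s, b]
b.in_neighbors = [s]
d = bellman_ford([s, a, b], s, w)
assert d[a] == -1  # s -> b -> a beats the direct s -> a edge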
dynamic programming
Last edited: November 11, 2025
dynamic programming is a three-step approach for tackling large, multi-step problems; the high-level idea: guessing + caching + recursion. dynamic programming is sometimes not good enough, since it may not run fast enough for what we need; that's when we turn to relaxation, or possibly greedy algorithms.
constituents
optimal sub-structure
- big problems break up into sub-problems
- optimal solutions can be explained in terms of sub-problems solution
overlapping sub-problems
- many different entries of each solution use the same sub-problem answer
- …meaning, each sub-problem's answer can be computed once, cached, and reused
requirements
bottom-up approach
- initialize a table
- fill it
top-down approach
- do the recursive approach
- memoize problem solutions during recursive call
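To illustrate both approaches, here is a sketch on a stock example (Fibonacci numbers; the example is mine, not from these notes):

from functools import lru_cache

# bottom-up: initialize a table, then fill it in dependency order
def fib_bottom_up(n):
    table = [0] * (n + 1)
    if n >= 1:
        table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

# top-down: do the recursive approach, memoizing during the recursive calls
@lru_cache(maxsize=None)
def fib_top_down(n):
    if n < 2:
        return n
    return fib_top_down(n - 1) + fib_top_down(n - 2)

assert fib_bottom_up(10) == fib_top_down(10) == 55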
additional information
main steps of dynamic programming
- Break a hard problem into sub-problems
- Guess what sub-problem to solve
- Solve the sub-problem and store the solution
- Repeat #2 and #3
- Combine sub-problem solutions to solve the hard problem
analyzing runtime of dynamic programming
To analyze the runtime of dynamic programming problems, you ask: how many sub-problems are there, and how much work does each one take? The runtime is roughly (number of sub-problems) \(\times\) (time per sub-problem); e.g., Bellman-Ford runs \(n-1\) rounds whose updates together touch all \(m\) edges, giving \(O\qty(nm)\).
