SU-CS161 Review

Last edited: December 12, 2025

Key Questions

  • Does it work?
  • Is it fast?
  • Can I do better?

General Structure

  • worst-case analysis: what’s the performance on the worst input
  • Big-oh notation

Syllabus

Divide and Conquer

We typically prove correctness of these via induction, and we prove runtime via the substitution method or the master theorem.
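
For example, merge sort's recurrence is T(n) = 2T(n/2) + O(n): a = 2 sub-problems, each of size n/b = n/2, plus O(n^1) merge work. Since a = b^1, the master theorem gives T(n) = O(n log n).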

Randomized Algorithm

quicksort: to guard against a worst-case input, we use randomness after the input is chosen, so no single input is always bad.
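
A minimal sketch of a randomized quicksort (random pivot; this three-way-partition variant is my illustration, not necessarily the lecture's in-place version):

import random

def quicksort(arr):
    if len(arr) <= 1:
        return arr
    # the pivot is random, so an adversary who fixes the input
    # in advance cannot force worst-case behavior
    pivot = random.choice(arr)
    less = [x for x in arr if x < pivot]
    equal = [x for x in arr if x == pivot]
    greater = [x for x in arr if x > pivot]
    return quicksort(less) + equal + quicksort(greater)

Expected runtime is O(n log n) on every input; only the random pivot choices vary.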

sorting with a model of computation

data structures

DFS/BFS

Shortest Paths

DP

dynamic programming has a recipe: identify optimal substructure, write down the recursive formulation, and fill in a table to build up the full answer.

all-pairs

dynamic programming

dynamic programming is a three-step recipe for tackling large, multi-step problems; the high-level idea is guessing + caching + recursion. dynamic programming is sometimes not good enough: it doesn't always yield an algorithm fast enough for what we need. That's when we turn to relaxation, or possibly greedy algorithms.

constituents

optimal sub-structure

  • big problems break up into sub-problems
  • optimal solutions can be expressed in terms of optimal sub-problem solutions

overlapping sub-problems

  • many different parts of the solution reuse the same sub-problem answer
  • …meaning, caching (memoizing) each answer saves repeated work

requirements

bottom-up approach

  1. initialize a table
  2. fill it
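
A minimal bottom-up sketch, using Fibonacci numbers as a stand-in problem (my example, not from the notes):

def fib_bottom_up(n):
    table = [0, 1] + [0] * (n - 1)   # 1. initialize a table of sub-answers
    for k in range(2, n + 1):        # 2. fill it, smallest sub-problem first
        table[k] = table[k - 1] + table[k - 2]
    return table[n]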

top-down approach

  1. write the recursive solution
  2. memoize sub-problem solutions during the recursive calls
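
The same stand-in problem, top-down; here functools.lru_cache does the memoization:

from functools import lru_cache

@lru_cache(maxsize=None)
def fib_top_down(n):
    # plain recursion, but each answer is cached,
    # so every sub-problem is solved at most once
    if n < 2:
        return n
    return fib_top_down(n - 1) + fib_top_down(n - 2)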

additional information

main steps of dynamic programming

  1. Break a hard problem into sub-problems
  2. Guess what sub-problem to solve
  3. Solve the sub-problem and store the solution
  4. Repeat #2 and #3
  5. Combine sub-problem solutions to solve the hard problem
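
A sketch of these steps on a standard illustration (minimum coins to make an amount; my example, not necessarily the lecture's), where step 2's guess is the last coin used:

def min_coins(amount, coins):
    INF = float("inf")
    best = [0] + [INF] * amount      # best[x]: fewest coins summing to x
    for x in range(1, amount + 1):
        for c in coins:              # guess the last coin c ...
            if c <= x and best[x - c] + 1 < best[x]:
                best[x] = best[x - c] + 1   # ... and reuse the stored sub-answer
    return best[amount]              # INF if the amount is unreachable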

analyzing runtime of dynamic programming

To analyze the runtime of a dynamic programming algorithm, you ask: how many sub-problems are there, and how much time does each take once smaller answers are cached? The total is roughly (number of sub-problems) × (time per sub-problem); e.g., the Fibonacci sketches above have n sub-problems at O(1) each, so O(n) total.

greedy algorithm

Make choices one at a time, never look back.

when does one greedy?

When the problem exhibits a particularly nice optimal substructure: “at every step, when we make a choice, we do not rule out an optimal solution; when we get to the end, we have a solution, and thus it must be optimal.” In other words, if after making all of our choices we still haven’t ruled out an optimal solution, then the solution we end up with must itself be optimal.
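
A standard illustration (activity selection; my example, not necessarily the lecture's): repeatedly take the compatible activity that ends earliest. That choice never rules out an optimal solution, so the final schedule is optimal.

def activity_selection(intervals):
    intervals = sorted(intervals, key=lambda iv: iv[1])  # sort by end time
    chosen, current_end = [], float("-inf")
    for start, end in intervals:
        if start >= current_end:   # compatible with everything chosen so far
            chosen.append((start, end))
            current_end = end
    return chosen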

knapsack

unbounded knapsack

Suppose I have infinitely many copies of every item; what’s the most valuable way to fill the knapsack?

DP, with sub-problems being smaller knapsacks: K[x] = max_i { K[x - w_i] + v_i : w_i ≤ x }, where K[x] is the best value achievable with capacity x and K[0] = 0.

def solve(knapsack_size, weights, values):
    # dp[x]: best value achievable with a capacity-x knapsack
    # sol[x]: one list of item indices achieving dp[x]
    dp = [0] * (knapsack_size + 1)
    sol = [[] for _ in range(knapsack_size + 1)]

    for x in range(1, knapsack_size + 1):
        for i in range(len(weights)):
            if weights[i] <= x:
                # optimal substructure: a best capacity-x knapsack that uses
                # item i is item i plus a best capacity-(x - w_i) knapsack
                candidate = dp[x - weights[i]] + values[i]
                if candidate > dp[x]:
                    dp[x] = candidate
                    sol[x] = sol[x - weights[i]] + [i]

    return dp[knapsack_size], sol[knapsack_size]
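
A quick sanity check with made-up data (the weights and values below are hypothetical):

best, items = solve(10, weights=[3, 4, 5], values=[4, 5, 7])
print(best, items)  # 14 [2, 2]: two copies of the (weight 5, value 7) item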

0/1 knapsack

Suppose I have one copy of each item; what’s the most valuable way to fill the knapsack? Now we keep a 2D DP table, indexed by both capacity and the set of items we’re allowed to use. Casework on the last item j: either it’s part of the optimal solution, or it isn’t.
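
A minimal sketch of that 2D table, assuming K[x][j] is the best value with capacity x using only the first j items:

def knapsack_01(capacity, weights, values):
    n = len(weights)
    # K[x][j]: best value with capacity x using only items 1..j
    K = [[0] * (n + 1) for _ in range(capacity + 1)]
    for x in range(capacity + 1):
        for j in range(1, n + 1):
            K[x][j] = K[x][j - 1]        # case 1: item j is not used
            if weights[j - 1] <= x:      # case 2: item j is used (at most once)
                K[x][j] = max(K[x][j],
                              K[x - weights[j - 1]][j - 1] + values[j - 1])
    return K[capacity][n]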