Posts

gradient descent

Last edited: September 9, 2025

It’s hard to find a globally optimal solution, so we instead make local progress.

constituents

  • parameters \(\theta\)
  • step size \(\alpha\)
  • cost function \(J\) (and its derivative \(J'\))

requirements

let \(\theta^{(0)} = 0\) (or a random point), and then:

\begin{equation} \theta^{(t+1)} = \theta^{(t)} - \alpha J'\qty(\theta^{(t)}) \end{equation}

“Update the weights by taking a step in the opposite direction of the gradient with respect to the weights.” We stop, by the way, when the result is “good enough”: the training data is noisy enough that a slightly non-converged optimization is fine.
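
As a minimal sketch of this update in code, assume a made-up one-dimensional cost \(J\qty(\theta) = \qty(\theta - 3)^{2}\), so \(J'\qty(\theta) = 2\qty(\theta - 3)\); the step size and iteration count here are likewise arbitrary:

#include <stdio.h>

// derivative of the made-up cost J(theta) = (theta - 3)^2
double J_prime(double theta) { return 2.0 * (theta - 3.0); }

int main(void) {
    double theta = 0.0; // theta^(0) = 0
    double alpha = 0.1; // step size

    for (int t = 0; t < 100; t++) {
        // theta^(t+1) = theta^(t) - alpha * J'(theta^(t))
        theta -= alpha * J_prime(theta);
    }

    printf("theta = %f\n", theta); // approaches the minimizer, 3
    return 0;
}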

Human Health Index

Last edited: September 9, 2025

Lecture

insertion sort

Last edited: September 9, 2025

insertion sort is an algorithm that solves the sorting problem.

constituents

a sequence of \(n\) numbers \(\{a_1, \dots a_{n}\}\), called keys

intuition

Say you have a sorted list; sticking a new element into the list doesn’t change the fact that the list is sorted.

requirements

Insertion sort produces an ordered sequence \(\{a_1', \dots a_{n}'\}\) s.t. \(a_1' \leq \dots \leq a_{n}'\)

void insertion_sort(int length, int *A) {
    for (int j=1; j<length; j++) {
        int key = A[j];

        // insert the key into its correct position
        // within the already-sorted prefix A[0..j-1]
        int i = j-1;

        while (i >= 0 && A[i] > key) { // while earlier elements
                                       // have a larger key
            A[i+1] = A[i]; // shift them one slot to the right
            i -= 1;        // and keep scanning leftward
        }

        // put our new element into the correct place
        A[i+1] = key;
    }
}

This is an \(O\qty(n^{2})\) algorithm in the worst case; if the input is already sorted, the inner loop never runs and it finishes in \(O\qty(n)\).
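
A minimal driver to sanity-check the routine above (the sample array and its contents are arbitrary):

#include <stdio.h>

int main(void) {
    int A[] = {5, 2, 4, 6, 1, 3};
    int n = sizeof A / sizeof A[0];

    insertion_sort(n, A);

    for (int i = 0; i < n; i++)
        printf("%d ", A[i]); // prints: 1 2 3 4 5 6
    printf("\n");
    return 0;
}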

Locally-Weighted Regression

Last edited: September 9, 2025

Malcolm X and MLK Index

Last edited: September 9, 2025