SU-CS361 Stochastic Methods, Population Methods, and Constraints Index
Last edited: August 8, 2025
SU-CS361: Derivatives, Bracketing, Descent, and Approximation Index
Last edited: August 8, 2025
- Formal Formulation of Optimization
- constraint
- types of conditions
- Derivatives
- Directional Derivatives
- numerical methods
- exact methods: autodiff
- Bracketing (one-dimensional optimization schemes)
- Descent Direction Iteration
- First-Order Methods
- Second-Order Methods
- Newton’s Method
- or approximate it using the Secant Method
- Direct Methods
- Cyclic Coordinate Search
- Accelerated Coordinate Search
- Powell’s Method
- Hooke-Jeeves Search
- Generalized Pattern Search
- opportunistic search
- dynamic ordering
- Nelder-Mead Simplex Method
SU-ENG7CE Creative Writing
Last edited: August 8, 2025
SU-ENGR76 APR022024
Last edited: August 8, 2025
Clarke’s Third Law
Sufficiently advanced technology is indistinguishable from magic
SU-ENGR76 APR042024
Last edited: August 8, 2025
information
Information is the amount of surprise a message provides.
Shannon (1948): a mathematical theory for communication.
information value
The information value, or entropy, of an information source is the probability-weighted average surprise over all possible outcomes, where the surprise of an outcome with probability \(p\) is \(s(p) = \log_{2} \frac{1}{p}\):
\begin{equation} H(X) = \sum_{x \in X}^{} s(P(X=x)) P(X=x) \end{equation}
properties of entropy
- entropy is non-negative: \(H(X) \geq 0\)
- entropy of uniform: for \(M \sim CatUni[1, …, n]\), \(p_{i} = \frac{1}{|M|} = \frac{1}{n}\), and \(H(M) = \log_{2} |M| = \log_{2} n\)
- entropy is bounded: \(0 \leq H(X) \leq H(M)\) where \(|X| = |M|\) and \(M \sim CatUni[1 … n]\) (“the uniform distribution has the highest entropy”); the upper bound is reached iff \(X\) is uniformly distributed.
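The properties above can be checked numerically. Below is a minimal sketch (the function name `entropy` is illustrative, not from the course), assuming base-2 logarithms so that entropy is measured in bits and surprise is \(s(p) = \log_2 \frac{1}{p}\):

```python
import math

def entropy(probs):
    """Probability-weighted average surprise: H(X) = sum over x of P(x) * log2(1/P(x))."""
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

# Uniform distribution over n = 4 outcomes: H = log2(4) = 2 bits.
uniform = [0.25] * 4
print(entropy(uniform))  # 2.0

# Any non-uniform distribution over the same support has strictly lower entropy.
skewed = [0.7, 0.1, 0.1, 0.1]
print(entropy(skewed) < entropy(uniform))  # True
```

The `if p > 0` guard skips zero-probability outcomes, which contribute nothing to the average (their weight is zero), and avoids `log2(1/0)`.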
binary entropy function
For a binary outcome \(X \in \{1,2\}\), where \(P(X=1) = p_1\) and \(P(X=2) = 1-p_1\), we can write:
\begin{equation} H(X) = p_{1} \log_{2} \frac{1}{p_{1}} + (1-p_{1}) \log_{2} \frac{1}{1-p_{1}} \end{equation}
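A minimal sketch of the binary entropy function (the name `binary_entropy` is illustrative; base-2 logarithms assumed). Its peak at \(p_1 = 0.5\) matches the uniform-distribution bound above:

```python
import math

def binary_entropy(p1):
    """Entropy in bits of a two-outcome source with P(X=1) = p1."""
    if p1 in (0.0, 1.0):
        return 0.0  # a certain outcome carries no surprise
    return p1 * math.log2(1 / p1) + (1 - p1) * math.log2(1 / (1 - p1))

print(binary_entropy(0.5))  # 1.0 bit: a fair coin is maximally surprising
print(binary_entropy(0.9) < binary_entropy(0.5))  # True: a biased coin is more predictable
```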