theorem of alternatives
Last edited: February 2, 2026
For two systems of inequality and equality constraints:
- weak alternatives: at most one of the systems is feasible
- strong alternatives: exactly one of the systems is feasible
A theorem of alternatives says that two systems are either weak or strong alternatives.
- \(x > a\), \(x < a - 1\) are weak alternatives,
- \(x > a\), \(x < a\) are strong alternatives
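A concrete pair covered by such a theorem (not the toy example above): the linear systems \(Ax \leq b\) and \(\lambda \geq 0,\ A^{\top}\lambda = 0,\ b^{\top}\lambda < 0\) are strong alternatives. A minimal CVXPY sketch that checks this numerically, with random data chosen purely for illustration:
import cvxpy as cp
import numpy as np
# Farkas-type pair: system 1 is {x : A x <= b}; system 2 is
# {lam : lam >= 0, A^T lam = 0, b^T lam < 0}. Exactly one should be feasible.
np.random.seed(0)
m, n = 5, 3
A, b = np.random.randn(m, n), np.random.randn(m)
# system 1: a plain feasibility problem
x = cp.Variable(n)
p1 = cp.Problem(cp.Minimize(0), [A @ x <= b])
p1.solve()
feas1 = p1.status == "optimal"
# system 2: normalize lam (sum == 1 rules out lam = 0) and ask whether
# b^T lam can be made strictly negative
lam = cp.Variable(m, nonneg=True)
p2 = cp.Problem(cp.Minimize(b @ lam), [A.T @ lam == 0, cp.sum(lam) == 1])
p2.solve()
feas2 = p2.status == "optimal" and p2.value < -1e-9
print(feas1, feas2)  # exactly one should be True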
convex optimization
Last edited: January 1, 2026
Tractable optimization problems.
We can solve these problems reliably and efficiently.
constituents
- \(x \in \mathbb{R}^{n}\) is the vector of optimization variables
- \(f_{0}\) is the objective function, the “soft” part to be minimized
- \(f_{1} … f_{m}\) are the inequality constraints
- \(g_{1} … g_{p}\) are the equality constraints
requirements
Generally of the form:
\begin{equation} \min f_{0}\qty(x) \end{equation}
subject to:
\begin{align} &f_{i} \qty(x) \leq 0, i = 1 \dots m \\ &Ax = b \end{align}
where \(f_{0}, \dots, f_{m}\) are convex functions and the equality constraints are affine.
CVXPY
Last edited: January 1, 2026
CVXPY allows us to cast convex optimization tasks into OOP code.
\begin{align} \min \|Ax - b\|^{2}_{2} \end{align}
subject to:
\(x \geq 0\)
import cvxpy as cp
A, b = ...  # problem data (A is m x n, b has length m)
x = cp.Variable(n)  # n = number of columns of A
obj = cp.norm2(A@x - b)**2  # ||Ax - b||_2^2
constraints = [x >= 0]
prob = cp.Problem(cp.Minimize(obj), constraints)
prob.solve()  # after solving, x.value holds the minimizer
How it works
- starts with the optimization problem \(P_{1}\)
- applies a series of problem transformations \(P_{2} … P_{N}\)
- the final problem \(P_{N}\) should be one of: linear program (LP), quadratic program (QP), SOCP, or SDP
- calls a specialized solver on \(P_{N}\)
- retrieves the solution of the original problem by reversing the transformations
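We can peek at this pipeline from CVXPY itself. A sketch, reusing prob and the import from the example above:
# which specialized solver actually ran
print(prob.solver_stats.solver_name)
# the transformed problem data CVXPY would hand to a given solver,
# plus the chain of reductions it applied (and later reverses)
data, chain, inverse_data = prob.get_problem_data(cp.SCS)
print([type(r).__name__ for r in chain.reductions])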
Disciplined Convex Programming
Last edited: January 1, 2026
Specify objective as:
- minimize scalar convex expression
- maximize scalar concave expression
and constraints:
- convex expr <= concave expr
- concave expr >= convex expr
- affine expr = affine expr
curvatures of all expressions must be DCP-certified. We require this because you can then move everything to one side of a constraint: convex expr <= concave expr becomes convex expr - concave expr <= 0, whose left side is certified convex.
you certify DCP using general composition rules that preserve convexity
DCP is sufficient for convexity, but not necessary
Consider:
\begin{equation} f\qty(x) = \sqrt{1+x^{2}} \end{equation}
This function is convex, but written this way it is not DCP (a concave \(\sqrt{\cdot}\) composed with a convex argument breaks the composition rules); rewriting it as \(\|(1, x)\|_{2}\) is DCP.
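A quick CVXPY check of that claim (a sketch; cp.hstack builds the vector \((1, x)\)):
import cvxpy as cp
x = cp.Variable()
f_bad = cp.sqrt(1 + cp.square(x))       # convex function, but not DCP as written
f_good = cp.norm(cp.hstack([1, x]), 2)  # same function, rewritten to satisfy DCP
print(f_bad.curvature, f_bad.is_dcp())    # expect: UNKNOWN False
print(f_good.curvature, f_good.is_dcp())  # expect: CONVEX True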
geometric programming
Last edited: January 1, 2026
A geometric program:
\begin{align} \min_{x}\quad & f_{0}\qty(x) \\ \textrm{s.t.} \quad & f_{i}\qty(x) \leq 1, i= 1\dots m \\ & h_{i}\qty(x) = 1, i = 1 \dots p \end{align}
where each \(f_{i}\) is a posynomial and each \(h_{i}\) is a monomial. Taking logs, with the substitution \(y = \log \qty(x)\), turns each monomial constraint into an affine equation in \(y\) and each posynomial constraint into a log-sum-exp (convex) inequality in \(y\). This implies that solving for the optimal \(\log \qty(x)\), which is the same as solving for \(x\) since \(x > 0\), is a convex problem.
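A small CVXPY sketch of a GP in this form; the particular posynomial and monomial below are just an illustrative choice, and gp=True applies exactly this log transformation:
import cvxpy as cp
x = cp.Variable(pos=True)  # GP variables must be positive
y = cp.Variable(pos=True)
z = cp.Variable(pos=True)
objective = cp.Minimize(x / y + z)  # posynomial objective
constraints = [
    (2 * x * y + 2 * x * z + 2 * y * z) / 10 <= 1,  # posynomial <= 1
    x * y * z == 1,  # monomial == 1
]
prob = cp.Problem(objective, constraints)
prob.solve(gp=True)  # solved in the log-transformed (convex) domain
print(prob.value, x.value, y.value, z.value)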
