SU-CS361 APR302024

Last edited: August 8, 2025

Multi-Objective Optimization

  1. identify the non-dominated individuals (those whose objective vectors are not dominated by any other individual); these form the Pareto frontier
  2. create all combinations of input parameters, and compute a Pareto frontier over them
  3. choose a weighting between the objectives you care about, and identify the elements that align with the Pareto frontier
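The first step above can be sketched as follows (a minimal illustration, assuming minimization in every objective; the function name and sample points are my own, not from the course):

```python
import numpy as np

def pareto_frontier(points):
    """Return the non-dominated rows of `points` (lower is better in every objective)."""
    points = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(points):
        others = np.delete(points, i, axis=0)
        # p is dominated if some other point is <= in all objectives and < in at least one
        dominated = np.any(np.all(others <= p, axis=1) & np.any(others < p, axis=1))
        if not dominated:
            keep.append(i)
    return points[keep]

# two objectives, minimizing both; [3, 3] is dominated by [2, 2]
pts = [[1.0, 4.0], [2.0, 2.0], [3.0, 3.0], [4.0, 1.0]]
front = pareto_frontier(pts)
```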

Pareto Optimality

At a Pareto optimal point, improving one objective value necessarily worsens another; that is, a Pareto optimal point is not dominated.

dominate

one point dominates another if it is no worse in every objective and strictly better in at least one. Formally (for minimization), \(x\) dominates \(x'\) when \(f_i(x) \leq f_i(x')\) for all \(i\) and \(f_i(x) < f_i(x')\) for some \(i\).
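As a small sketch of this definition (minimization assumed; the function name is illustrative):

```python
def dominates(fa, fb):
    """True if objective vector fa dominates fb (minimization):
    no worse in every objective, strictly better in at least one."""
    return all(a <= b for a, b in zip(fa, fb)) and any(a < b for a, b in zip(fa, fb))

dominates([1.0, 2.0], [2.0, 2.0])  # True: better in the first objective, equal in the second
dominates([1.0, 3.0], [2.0, 2.0])  # False: each point is better in a different objective
```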

SU-CS361 MAY022024

Sampling Plans

Many methods require a set of samples of the objective value, e.g. to fit a local model or to seed population methods, so…

Full Factorial

Grid it up.

  • easy to implement
  • good: covers the space evenly
  • bad: sample count grows exponentially with dimension
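A minimal sketch of full factorial sampling (the function name and bounds are illustrative):

```python
import itertools
import numpy as np

def full_factorial(lower, upper, samples_per_dim):
    """Grid it up: every combination of evenly spaced points per dimension."""
    axes = [np.linspace(lo, hi, samples_per_dim) for lo, hi in zip(lower, upper)]
    return np.array(list(itertools.product(*axes)))

grid = full_factorial([0, 0], [1, 1], 3)  # 3**2 = 9 points in 2-D
```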

Random Sampling

Use a pseudorandom generator to pick points in your space.

  • allows for any number of evaluations you specify
  • statistically, the points tend to clump together
  • also needs many samples to get good coverage
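A sketch of random sampling over a box (seed and bounds are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_sampling(lower, upper, m):
    """m pseudorandom points uniformly in the box [lower, upper]."""
    lower, upper = np.asarray(lower), np.asarray(upper)
    return lower + rng.random((m, len(lower))) * (upper - lower)

pts = random_sampling([0, 0], [1, 1], 10)
```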

Uniform Projection

We want the samples, projected onto each dimension, to be spread uniformly. To implement this, we grid up each dimension, shuffle the bin ordering of each dimension independently, and then read off the coordinates to create the points:
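The shuffle-and-read-off procedure can be sketched like this (a minimal version over the unit box; the function name is my own):

```python
import numpy as np

rng = np.random.default_rng(0)

def uniform_projection(m, n):
    """m samples in [0, 1]^n: grid each dimension into m bins,
    permute each dimension's bin order independently, then read
    off one coordinate per dimension to form each point."""
    cols = [rng.permutation(m) for _ in range(n)]   # one permutation per dimension
    return (np.stack(cols, axis=1) + 0.5) / m       # map bin indices to bin centers

X = uniform_projection(5, 2)
# each column, projected onto its axis, hits all 5 bins exactly once
```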

SU-CS361 MAY072024

Generalization Error

\begin{equation} \epsilon_{gen} = \mathbb{E}_{x \sim \mathcal{X}} \qty[\qty(f(x) - \hat{f}(x))^{2}] \end{equation}

we usually cannot compute this expectation exactly, so we instead estimate it by averaging over specific points we measured.
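That averaging estimate can be sketched as (the toy \(f\), surrogate, and holdout points are illustrative):

```python
import numpy as np

def estimated_gen_error(f, fhat, X_holdout):
    """Average squared error over measured points, standing in for
    the expectation over x ~ X in the generalization-error formula."""
    return np.mean([(f(x) - fhat(x)) ** 2 for x in X_holdout])

f = lambda x: x ** 2
fhat = lambda x: x ** 2 + 0.1   # surrogate off by a constant 0.1
err = estimated_gen_error(f, fhat, np.linspace(-1, 1, 50))  # ≈ 0.1**2 = 0.01
```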

Probabilistic Surrogate Models

Gaussian Process

A Gaussian Process is a Gaussian distribution over functions!

Consider a mean function \(m(x)\) and a covariance (kernel) function \(k(x, x')\), which we use to infer a set of objective values \(y_{j} \in \mathbb{R}\).

\begin{equation} \mqty[y_1 \\ \vdots \\ y_{m}] \sim \mathcal{N} \qty(\mqty[m(x_1) \\ \vdots \\ m(x_{m})], \mqty[k(x_1, x_1) & \dots & k(x_1, x_{m}) \\ \vdots & \ddots & \vdots \\ k(x_{m}, x_{1}) & \dots & k(x_{m}, x_{m})]) \end{equation}
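Drawing one sample of \((y_1, \dots, y_m)\) from this multivariate normal can be sketched as follows (assuming a zero mean function and a squared-exponential kernel, a common choice not specified in the notes; the small diagonal jitter is for numerical stability):

```python
import numpy as np

rng = np.random.default_rng(0)

def squared_exp_kernel(xa, xb, ell=1.0):
    """k(x, x') = exp(-(x - x')^2 / (2 ell^2))."""
    return np.exp(-((xa[:, None] - xb[None, :]) ** 2) / (2 * ell ** 2))

x = np.linspace(0, 5, 50)
m = np.zeros(len(x))           # mean function m(x) = 0
K = squared_exp_kernel(x, x)   # covariance matrix K_ij = k(x_i, x_j)
# one function draw (y_1, ..., y_m) from N(m, K)
y = rng.multivariate_normal(m, K + 1e-8 * np.eye(len(x)))
```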

SU-CS361 MAY092024

optimization uncertainty

  • irreducible uncertainty: uncertainty inherent to a system
  • epistemic uncertainty: subjective lack of knowledge about a system from our standpoint

uncertainty can be represented as a vector of random variables \(z\) over which the designer has no control. Feasibility of a design point, then, depends on \((x, z) \in \mathcal{F}\), where \(\mathcal{F}\) is the feasible set of design points.

set-based uncertainty

set-based uncertainty treats uncertainty \(z\) as belonging to some set \(\bold{Z}\). This means that we typically solve a minimax problem:

\begin{equation} \min_{x} \max_{z \in \bold{Z}} f(x, z) \end{equation}
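A grid-search sketch of this minimax approach (the toy \(f\), candidate designs, and uncertainty set are illustrative):

```python
import numpy as np

# toy f(x, z): design variable x, uncertainty z drawn from the set Z
f = lambda x, z: (x - z) ** 2

X = np.linspace(-1, 1, 201)       # candidate designs
Z = np.array([-0.5, 0.0, 0.5])    # set-based uncertainty Z

# minimax: pick the x minimizing the worst case over z in Z
worst = np.array([max(f(x, z) for z in Z) for x in X])
x_star = X[np.argmin(worst)]      # 0.0: equidistant from the extremes of Z
```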

SU-CS361 Midterm 1 Review

  1. error complexity in big \(O\) of finite-difference methods: Midpoint Method, Forward Search, Backward Search (also what are those)
  2. Fibonacci Search and Golden Section Search
  3. Bisection Method
  4. Shubert-Piyavskii Method
  5. Trust Region Methods equations
  6. Secant Method
  7. the Nelder-Mead Simplex Method chart