Euler-Bernoulli Theory

The Euler-Bernoulli Theory is a theory in solid mechanics which describes how much a beam deflects under an applied load.

Assumptions

For Euler-Bernoulli Theory to apply in its basic form, we make the following assumptions:

  • The “beam” being bent is modeled as a 1D object: it has length but no width or depth
  • For this page, \(+x\) is “right”, \(+y\) is “in”, and \(+z\) is “up”
  • Probably more, but we only have this so far.
  • The general form of the Euler-Bernoulli Theory assumes a freestanding beam

Basic Statement

The most basic form of the Euler-Bernoulli Equation looks like this:
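
Using standard notation, with \(w(x)\) the deflection of the beam in the \(z\) direction, \(E\) the elastic modulus, \(I\) the second moment of area of the cross-section, and \(q(x)\) the distributed load:

\begin{equation} \frac{d^2}{dx^2} \left( EI \frac{d^2 w}{dx^2} \right) = q(x) \end{equation}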

Euler's Equation

\begin{equation} f(x) = e^{ix} = \cos (x) + i\sin (x) \end{equation}

This traces out a circle of radius one, because at every point the velocity is orthogonal to the position (since \(f'(x) = if(x)\), and multiplying by \(i\) corresponds to a rotation by 90 degrees).

And so,

\begin{equation} z = re^{i\theta} \end{equation}

gives any point in the complex plane, in polar form.
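
As a quick numerical sanity check of both identities, here is a short Python sketch using the standard-library cmath module (the values of \(\theta\) and \(r\) are arbitrary examples):

```python
import cmath

theta = 0.7  # arbitrary example angle, in radians

# Euler's formula: e^{i*theta} = cos(theta) + i*sin(theta)
lhs = cmath.exp(1j * theta)
rhs = cmath.cos(theta) + 1j * cmath.sin(theta)
assert abs(lhs - rhs) < 1e-12

# The curve stays on the unit circle: |e^{i*theta}| = 1
assert abs(abs(lhs) - 1.0) < 1e-12

# Polar form: z = r * e^{i*theta} has modulus r and argument theta
r = 2.5
z = r * cmath.exp(1j * theta)
assert abs(abs(z) - r) < 1e-12
assert abs(cmath.phase(z) - theta) < 1e-12
```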

Evaluation

Our ultimate goal is to create a generalized model that learns from training data and extrapolates to future test data.

We don’t really care how well we fit the training data.

Key idea: fit the model on the training set, and test on a separate test set.

Requirements

We split our dataset into three parts (see the sketch after this list):

  • training set: used to fit the model
  • validation set: a quasi-test set used for model selection
  • test set: the actual test (we use it only once)
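
A minimal sketch of this workflow, assuming scikit-learn; the Ridge model, the split fractions, and the synthetic data are all placeholder choices:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; any (X, y) pair works here.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=200)

# Two chained splits: 60% train, 20% validation, 20% test.
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

# Fit candidate models on the training set; compare them on the validation set.
best_model, best_err = None, float("inf")
for alpha in [0.01, 0.1, 1.0, 10.0]:
    model = Ridge(alpha=alpha).fit(X_train, y_train)
    err = mean_squared_error(y_val, model.predict(X_val))
    if err < best_err:
        best_model, best_err = model, err

# Touch the test set exactly once, at the very end.
print(mean_squared_error(y_test, best_model.predict(X_test)))
```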

Additional Information

Root-Mean-Square Error

This is basically the least-squares error, but normalized by the number of points and square-rooted so it is in the same units as the target.
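
Concretely, for predictions \(\hat{y}_i\) and true values \(y_i\) over \(n\) points:

\begin{equation} \mathrm{RMSE} = \sqrt{ \frac{1}{n} \sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^2 } \end{equation}

The \(1/n\) is the normalization, and the square root puts the error back in the units of \(y\).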

Evaluating Model Fitness

We want to compare features of the model to features of the data:

Visual Diagnostics

  1. PDF plot
  2. CDF of data vs. CDF of model
  3. Quantile-Quantile plot
  4. Calibration Plot
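
As an example of item 3, here is a minimal sketch of a quantile-quantile plot; the data and model samples are synthetic stand-ins:

```python
import numpy as np
import matplotlib.pyplot as plt

# Stand-ins: real data and real model draws would replace these.
rng = np.random.default_rng(0)
data_samples = rng.normal(loc=0.2, scale=1.1, size=1000)
model_samples = rng.normal(loc=0.0, scale=1.0, size=1000)

# Compare matching quantiles; points on the diagonal mean a good fit.
qs = np.linspace(0.01, 0.99, 99)
plt.scatter(np.quantile(model_samples, qs), np.quantile(data_samples, qs), s=10)
plt.plot([-3, 3], [-3, 3], "k--")  # reference line y = x
plt.xlabel("model quantiles")
plt.ylabel("data quantiles")
plt.show()
```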

Summative Metrics

  1. KL Divergence
  2. Expected Calibration Error
  3. Maximum Calibration Error
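
As an example of the first metric, a minimal sketch of KL divergence between two discrete distributions; the probability vectors p and q are made-up examples:

```python
import numpy as np

def kl_divergence(p, q):
    """D_KL(P || Q) = sum_i p_i * log(p_i / q_i) for discrete distributions."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0  # terms with p_i = 0 contribute 0 by convention
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

p = [0.5, 0.3, 0.2]  # "data" distribution
q = [0.4, 0.4, 0.2]  # "model" distribution
print(kl_divergence(p, q))  # small but nonzero: close fit, not exact
```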

Marginalization Ignores Covariances

Notice that the figure on the right captures the distribution much better, yet the marginal distributions don’t show this. This is because marginalizing each dimension separately ignores the covariances between dimensions. Hence, remember to keep dimensions together, and make sure any projections capture the covariances.
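
To make this concrete, here is a small sketch with synthetic Gaussian data: two 2D datasets whose marginals look the same while their covariances differ completely:

```python
import numpy as np

rng = np.random.default_rng(0)
cov_a = [[1.0, 0.9], [0.9, 1.0]]  # strongly correlated dimensions
cov_b = [[1.0, 0.0], [0.0, 1.0]]  # independent dimensions
a = rng.multivariate_normal([0, 0], cov_a, size=5000)
b = rng.multivariate_normal([0, 0], cov_b, size=5000)

# Each marginal is (approximately) standard normal in both datasets...
print(a.mean(axis=0), a.std(axis=0))  # ~[0, 0], ~[1, 1]
print(b.mean(axis=0), b.std(axis=0))  # ~[0, 0], ~[1, 1]

# ...but the joint structure is completely different.
print(np.corrcoef(a.T)[0, 1])  # ~0.9
print(np.corrcoef(b.T)[0, 1])  # ~0.0
```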