Houjun Liu

Numerical Approximation Schemes

Consider a general non-linear First Order ODE:

\begin{equation} x' = f(x) \end{equation}

Suppose a solution to this equation exists on some time interval. Given \(x(t_0) = x_0\), can we determine what \(x(t_0+T)\) would be? Can we approximate it with explicit numbers?

The solution has to exist for the whole time interval: blow-up cannot be present during numerical estimation.

Explicit Euler Method

\begin{equation} x(t_{i+1}) \approx x_{i+1} = x_{i} + h f(x_{i}) \end{equation}

motivation

recall that given \(x(t_0) = x_0\), we desire \(x(t_0+T)\).

  1. divide your solution interval into \(N\) small intervals; each interval would have length \(h= \frac{T}{N}\)
  2. let \(t_{i} = t_0 + i \frac{T}{N}\), where \(t_{N} = t_{0}+T\)
  3. for each time \(t_{i}\), we attempt to compute an iterate \(x_{i}\), and we’d like to estimate the error between \(x_{i}\) and the true value \(x(t_{i})\).

In the explicit Euler method, we make piecewise linear approximations. At each point \(x_i\), we follow the slope estimated via the ODE at that point. Specifically:

\begin{equation} x'(t) = \lim_{k \to 0} \frac{x(t+k)-x(t)}{k} \approx \frac{x(t+h)-x(t)}{h} \end{equation}

for some small \(h\). Meaning, specifically, \(x(t+h) \approx x(t) + h x'(t)\), where \(h\) is the step size we computed before.

Consider that we had an ODE \(x' = f(x)\), which gives us:

\begin{equation} x_1 = x_{0}+ h f(x_0) \approx x(t_0 + h) \end{equation}

Following this scheme, we can calculate from \(x_0\) all the way stepwise to \(x_{N}\).
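As a concrete illustration, here is a minimal Python sketch of this stepping scheme; the test problem \(x' = -x\), the function name, and the parameter values are illustrative assumptions, not part of the notes:

```python
# explicit Euler for an autonomous ODE x' = f(x)
def explicit_euler(f, x0, T, N):
    """Approximate x(t0 + T) given x(t0) = x0, using N steps of size h = T/N."""
    h = T / N
    x = x0
    for _ in range(N):
        x = x + h * f(x)  # x_{i+1} = x_i + h f(x_i)
    return x

# usage: x' = -x with x(0) = 1; the exact value is x(1) = e^{-1} ~ 0.3679
print(explicit_euler(lambda x: -x, x0=1.0, T=1.0, N=100))  # ~0.3660
```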

evaluation

Situation: we have \(x_{N}\) and we have \(x(t_{N})\); how close are they? In fact:

\begin{equation} |x_{N} - x(t_{N}) | \leq Ch \end{equation}

Here \(C = C(x_0, t_0, T, f)\) is a constant depending on the problem data, which gives error bounds specific to the problem you are solving.

stiffness

Certain parts of a solution may decay or oscillate very differently from other parts of the solution.

example

Consider a system:

\begin{equation} y’ = \mqty(-1 & 0 \\ 0 & -10)y \end{equation}

our solutions look like:

\begin{equation} y(t) = \mqty(c_1 e^{-t} \\ c_2 e^{-10t}) \end{equation}

so explicit Euler gives \(x_i = (1-h)^{i} x_0\) for the top component and \(x_{i} = (1-10h)^{i}x_0\) for the bottom, which means the two components have different requirements on \(h\) to be able to converge (\(h < 2\) for the top, but \(h < \frac{1}{5}\) for the bottom)
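A small sketch of this effect; the system is decoupled, so it reduces to the two scalar recurrences above (the step sizes chosen are illustrative assumptions):

```python
# explicit Euler on y' = diag(-1, -10) y, component by component
def euler_components(h, steps, y0=(1.0, 1.0)):
    slow, fast = y0
    for _ in range(steps):
        slow = (1 - 1 * h) * slow   # x_i = (1 - h)^i x_0
        fast = (1 - 10 * h) * fast  # x_i = (1 - 10h)^i x_0
    return slow, fast

print(euler_components(h=0.3, steps=20))    # slow decays; fast blows up (|1 - 10h| = 2)
print(euler_components(h=0.05, steps=120))  # both decay once h < 1/5
```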

example 2

\begin{equation} y' = -5 (y-\cos t) \end{equation}

with method of undetermined coefficients, we obtain:

\begin{equation} y = \frac{25}{26} \cos t + \frac{5}{26} \sin t + Ce^{-5t} \end{equation}

the trigonometric terms are fine and not stiff at all, but the \(Ce^{-5t}\) term makes explicit Euler iterate like \((1-5h)^{i}\), meaning we need \(h < \frac{1}{5}\) for monotone decay (and \(h < \frac{2}{5}\) for decay at all).

motivation

Let’s consider:

\begin{equation} x' = -\lambda x \end{equation}

The explicit Euler method gives:

\begin{equation} x_{i+1} = (1-\lambda h)x_{i} \end{equation}

meaning, in general:

\begin{equation} x_{i} = (1-\lambda h)^{i} x_0 \end{equation}

We know the true solution is bound to decay (for \(\lambda > 0\)), yet the Explicit Euler iterate decays only when:

\begin{equation} -1 < 1-\lambda h < 1 \end{equation}

that is, only when \(h < \frac{2}{\lambda}\).

The Implicit Euler Method doesn’t have this problem.

consider:

\begin{equation} x_{i+1} = x_{i} - \lambda h x_{i+1} \end{equation}

meaning:

\begin{equation} x_{i} = \frac{1}{(1+\lambda h)^{i}}x_0 \end{equation}

which decays for every step size \(h > 0\).
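A quick sketch comparing the two recurrences; \(\lambda\), \(h\), and the step count are illustrative assumptions chosen so that \(h > \frac{2}{\lambda}\):

```python
# x' = -lambda x: explicit vs. implicit Euler recurrences
lam, h, x0, steps = 5.0, 0.5, 1.0, 10  # h = 0.5 > 2/lam = 0.4

explicit, implicit = x0, x0
for _ in range(steps):
    explicit = (1 - lam * h) * explicit  # x_{i+1} = (1 - lambda h) x_i
    implicit = implicit / (1 + lam * h)  # x_{i+1} = x_i / (1 + lambda h)

print(explicit)  # (-1.5)^10 ~ 57.7: grows, the wrong behavior
print(implicit)  # (1/3.5)^10 ~ 3.6e-6: decays, like the true solution
```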

Implicit Euler Method

A small twist on the Explicit Euler Method. To use this method, we formulate each step as:

\begin{equation} x_{i+1} - h f(x_{i+1}) = x_i \end{equation}

where we use Newton’s Method to solve for the \(x_{i+1}\) for which the left-hand side equals \(x_{i}\).

evaluation

We actually don’t incur much more error; it is still bounded by:

\begin{equation} |x_{N} - x(t_{N}) | \leq Ch \end{equation}

derivation

\begin{equation} \frac{x((t+h)-h) - x(t+h)}{-h} \approx x'(t+h) \end{equation}

this is the first-order Taylor approximation written backwards

This also yields:

\begin{equation} \frac{x((t+h)-h) - x(t+h)}{-h} = \frac{x(t+h)-x((t+h)-h)}{h} \end{equation}

Now, let \(t = t_0\), so that \(t_1 = t + h\). Recall that, because \(f\) defines the ODE:

\begin{equation} x'(t_1) = f(x(t_1)) = x'(t+h) \approx \frac{x(t_1) - x(t_0)}{h} \end{equation}

Multiplying both sides by \(h\) gives:

\begin{equation} hf(x(t_1)) = x(t_1) - x(t_0) \end{equation}

which gives:

\begin{equation} x(t_0) = x(t_1) - h f(x(t_1)) \end{equation}

we will now estimate \(x(t_{1})\) by an iterate \(x_1\): declaring \(x_1 := x(t_{1})\) in the relation above gives us:

\begin{equation} x_1 - h f(x_1) = x_0 \end{equation}

Let us define \(G(x) := x - h f(x)\), so the relation reads \(G(x_{1}) = x_0\).

Finally, we run Newton’s Method to solve for the \(x_1\) that produces \(x_0\), by finding the zero of \(G(x_1) - x_0\). Because \(h\) is small, \(x_0\) itself is a good initial guess (it is within \(O(h)\) of \(x_1\)), and then we can iterate.
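A minimal sketch of one such step, assuming we also have \(f'\) available for Newton’s Method; the tolerance and iteration cap are illustrative assumptions:

```python
# one implicit Euler step: solve G(x1) = x1 - h f(x1) = x0 with Newton's Method
def implicit_euler_step(f, fprime, x0, h, tol=1e-12, max_iter=50):
    x1 = x0  # for small h, the previous iterate is a good initial guess
    for _ in range(max_iter):
        g = x1 - h * f(x1) - x0        # G(x1) - x0, whose zero we want
        gprime = 1.0 - h * fprime(x1)  # G'(x1)
        step = g / gprime              # Newton update
        x1 -= step
        if abs(step) < tol:
            break
    return x1

# usage: one step of x' = -5x from x(0) = 1; matches 1 / (1 + 5 h) = 2/3
print(implicit_euler_step(lambda x: -5 * x, lambda x: -5.0, x0=1.0, h=0.1))
```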

Trapezoidal Method

\begin{equation} x_{i+1} = x_{i} + h \frac{f(x_{i+1})+f(x_{i})}{2} \end{equation}

motivation

“averaging smooths things out”:

\begin{equation} \frac{x(t+h) - x(t)}{h} \approx \frac{f(x(t+h)) + f(x(t))}{2} \end{equation}

meaning we have:

\begin{equation} \frac{x_1-x_0}{h} = \frac{f(x_1) + f(x_0)}{2} \end{equation}

which averages our derivatives out.

Multiplying by \(h\) and rearranging, this gives:

\begin{equation} x_1 - \frac{1}{2}h f(x_1) = x_0 + \frac{1}{2} h f(x_0) \end{equation}

which can also be written as:

\begin{equation} x_1 = x_0 + h \frac{f(x_1)+f(x_0)}{2} \end{equation}
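Since the update is implicit in \(x_1\), a sketch has to solve for it at each step; below this is done with a simple fixed-point iteration (an assumption for illustration; Newton’s Method also works):

```python
# one trapezoidal step, solving the implicit equation by fixed-point iteration
def trapezoid_step(f, x0, h, iters=20):
    x1 = x0 + h * f(x0)  # explicit Euler predictor as the initial guess
    for _ in range(iters):
        x1 = x0 + h * (f(x1) + f(x0)) / 2  # average of the two slopes
    return x1

# usage: x' = -x from x(0) = 1; exact value at t = 0.1 is e^{-0.1} ~ 0.904837
print(trapezoid_step(lambda x: -x, x0=1.0, h=0.1))  # ~0.904762
```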

explicitly

For the test equation \(x' = -\lambda x\), the update gives:

\begin{equation} x_{i} = \qty( \frac{1- \frac{1}{2}\lambda h}{1+ \frac{1}{2}\lambda h})^{i} x_0 \end{equation}

and the ratio has absolute value less than \(1\) for every \(h > 0\) (when \(\lambda > 0\)), so the iterate always decays.

evaluation

Importantly, this gives the tighter bound

\begin{equation} |x_{N} - x(t_{N}) | \leq Ch^{2} \end{equation}

Modified Euler Method

This is also called “Midpoint Method”.

This is one of the methods which doesn’t break down on “stiff” ODEs as easily, and it converges faster: the error shrinks like \(h^{2}\) rather than \(h\).

For some:

\begin{equation} \dv{x}{t} = f(t,x) \end{equation}

\begin{equation} x_{i+1} = x_{i} + h f\qty(t_{i} + \frac{1}{2}h, x_{i} + \frac{1}{2}h f(t_{i}, x_{i})) \end{equation}
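A one-step sketch of this update; the function name and test problem are illustrative assumptions:

```python
# one Modified Euler (Midpoint) step for x' = f(t, x)
def midpoint_step(f, t, x, h):
    half = x + 0.5 * h * f(t, x)         # half step along the current slope
    return x + h * f(t + 0.5 * h, half)  # full step along the midpoint slope

# usage: x' = -x, x(0) = 1, one step of h = 0.1
print(midpoint_step(lambda t, x: -x, 0.0, 1.0, 0.1))  # 0.905 vs e^{-0.1} ~ 0.904837
```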

this is motivated by the Trapezoidal Method, but a full justification is out of scope here:

“A thorough introduction to these methods requires additional background in approximation theory and numerical analysis”

  • The Book

error

\begin{equation} |x_{N} - x(t_{N}) | \leq Ch^{2} \end{equation}

motivation

we take a half step forward from our original point using its slope, compute the slope there, and use that midpoint slope for the full step.

Improved Euler Method

This is also called “Heun’s Method”.

\begin{equation} x_{i+1} = x_{i} + \frac{1}{2} h(f(t_{i}, x_{i}) + f(t_{i}+h, x_{i}+hf(t_{i}, x_{i}))) \end{equation}
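A one-step sketch, under the same illustrative assumptions as before:

```python
# one Improved Euler (Heun) step for x' = f(t, x)
def heun_step(f, t, x, h):
    m_here = f(t, x)                         # slope at the current point
    m_ahead = f(t + h, x + h * m_here)       # slope after a full Euler step
    return x + 0.5 * h * (m_here + m_ahead)  # average the two slopes

print(heun_step(lambda t, x: -x, 0.0, 1.0, 0.1))  # 0.905, also second order
```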

error

\begin{equation} |x_{N} - x(t_{N}) | \leq Ch^{2} \end{equation}

motivation

we compute the slope at the current location and at a full step in front, and average the two

Runge-Kutta Method

a.k.a. instead of contending with just the forward, backward, or middle slope from \(f\), we sample all of them and take a weighted average:

\begin{equation} \begin{cases} m_1 = f(t_{i}, x_{i}) \\ m_2 = f\qty(t_{i} + \frac{h}{2}, x_{i}+\frac{h}{2}m_{1}) \\ m_3 = f\qty(t_{i}+\frac{h}{2}, x_{i}+\frac{h}{2}m_{2}) \\ m_4 = f\qty(t_{i} + h, x_{i}+hm_{3}) \end{cases} \end{equation}

and then:

\begin{equation} x_{i+1} = x_{i} + \frac{1}{6}h m_{1} + \frac{1}{3} h m_{2} + \frac{1}{3} h m_{3} + \frac{1}{6} h m_{4} \end{equation}

the weights \(\frac{1}{6}, \frac{1}{3}, \frac{1}{3}, \frac{1}{6}\) are those of Simpson’s rule, with the midpoint slope sampled twice.
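A one-step sketch of the four-slope update; the test problem is again an illustrative assumption:

```python
# one classical Runge-Kutta (RK4) step for x' = f(t, x)
def rk4_step(f, t, x, h):
    m1 = f(t, x)
    m2 = f(t + h / 2, x + h / 2 * m1)
    m3 = f(t + h / 2, x + h / 2 * m2)
    m4 = f(t + h, x + h * m3)
    return x + h * (m1 + 2 * m2 + 2 * m3 + m4) / 6

print(rk4_step(lambda t, x: -x, 0.0, 1.0, 0.1))  # ~0.9048375 vs e^{-0.1} ~ 0.9048374
```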

error

\begin{equation} |x_{N} - x(t_{N}) | \leq Ch^{4} \end{equation}

motivation

this is essentially like “fitting a parabola” to our curve