Houjun Liu

SU-ENGR76 MAY022024

nyquist sampling theorem

Formally, the nyquist limit is stated as:

For \(X(t)\) a continuous-time signal whose frequency representation is bounded by \([0, B]\) Hz: if \(X\) is sampled every \(T\) seconds with \(T < \frac{1}{2B}\) (the sampling interval is smaller than \(\frac{1}{2B}\)), or equivalently \(\frac{1}{T} > 2B\) (the sampling frequency is larger than \(2B\)), then \(X\) can be reconstructed from its samples \(X(0), X(T), X(2T), \ldots\).
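For example, a signal bandlimited to \(B = 4\) kHz needs:

\begin{equation} \frac{1}{T} > 2B = 8\,\text{kHz} \iff T < \frac{1}{8000}\,\text{s} = 125\ \mu\text{s} \end{equation}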

At any time, we can go back and forth between the samples of \(X\) and its sinusoid representation via:

\begin{align} X(t) &= b_0 + \sum_{j=1}^{BT} a_{j} \sin \qty(2\pi \frac{j}{T}t) + b_{j} \cos \qty(2\pi \frac{j}{T}t) \\ &= A_{0} + \sum_{j=1}^{BT} A_{j} \sin \qty(2\pi \frac{j}{T} t + \phi_{j}) \end{align}

We use the second representation (in particular with \(A_{j} = \sqrt{a_{j}^{2} + b_{j}^{2}}\)) because it's easier to actually visualize and recover.
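A minimal sketch of going from samples back to the coefficients, assuming a toy signal with \(BT = 3\) harmonics (the names and numbers below are illustrative choices, not from the notes); it just sets up and solves the linear system relating samples to \(a_{j}, b_{j}\):

```python
import numpy as np

# Toy "bandlimited" signal: duration T, harmonics j = 1..K at frequencies j/T,
# so its highest frequency is K/T = B, i.e. K = B*T.
T, K = 1.0, 3
rng = np.random.default_rng(0)
b0 = rng.normal()
a, b = rng.normal(size=K), rng.normal(size=K)

def X(t):
    t = np.atleast_1d(t)
    out = np.full(t.shape, b0)
    for j in range(1, K + 1):
        out += a[j - 1] * np.sin(2 * np.pi * j / T * t) \
             + b[j - 1] * np.cos(2 * np.pi * j / T * t)
    return out

# Sample faster than 2B = 2K/T: 2K + 1 samples over [0, T).
n = 2 * K + 1
ts = np.arange(n) * T / n          # sampling interval T/n < 1/(2B) = T/(2K)

# Columns: [1, sin(2*pi*1/T t), cos(2*pi*1/T t), ..., sin(2*pi*K/T t), cos(...)]
cols = [np.ones(n)]
for j in range(1, K + 1):
    cols += [np.sin(2 * np.pi * j / T * ts), np.cos(2 * np.pi * j / T * ts)]
M = np.column_stack(cols)

# Solve samples = M @ [b0, a1, b1, ..., aK, bK] for the coefficients
coef, *_ = np.linalg.lstsq(M, X(ts), rcond=None)
print(np.allclose(coef, [b0, *np.ravel(np.column_stack([a, b]))]))  # True
```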

Passband Signal

What if our signal, instead of being a Baseband Signal (\(f \in [0,B]\)), is a Passband Signal, meaning \(f \in [f_{\min}, f_{\max}]\) with \(f_{\min} > 0\)?

We actually still only need a sampling rate of \(2(f_{\max} - f_{\min})\) samples per second.

It's the same nyquist limit argument, by counting degrees of freedom: over a window of length \(T\), only the frequencies in \([f_{\min}, f_{\max}]\) contribute, so there are only about \(2(f_{\max} - f_{\min})T\) unknown coefficients to solve for.
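For example, a hypothetical passband signal occupying \([99, 101]\) MHz:

\begin{equation} 2(f_{\max} - f_{\min}) = 2 \times 2\,\text{MHz} = 4\,\text{MHz} \ll 2 f_{\max} = 202\,\text{MHz} \end{equation}

so treating it as a baseband signal out to \(f_{\max}\) would be hugely wasteful.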

Interpolation

The nyquist limit is great and all, but we really don't want to wait the full duration \(T\) to collect all the samples needed to solve for every \(a_{j},b_{j}\) before we can reconstruct our signal.

So, even if we have our sequence of points spaced \(\frac{1}{2B}\) apart, we need an alternative way to reconstruct the signal as we go.

One way to reconstruct via interpolation is to just connect the dots; however, this is bad because it creates sharp corners (which introduce high-frequency content the original signal never had).
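As a small sketch (the toy sinusoid and rates below are my assumptions), connect-the-dots is just piecewise-linear interpolation, e.g. numpy's np.interp:

```python
import numpy as np

B = 5.0                          # frequency of the toy signal, in Hz
Ts = 1 / (2.5 * B)               # sample a bit faster than the nyquist rate 2B

t_samples = np.arange(0, 1, Ts)              # sample times
x_samples = np.sin(2 * np.pi * B * t_samples)

# "Connect the dots": piecewise-linear interpolation between samples
t_dense = np.linspace(0, t_samples[-1], 1000)
x_linear = np.interp(t_dense, t_samples, x_samples)

# The corners show up as error against the true smooth signal
x_true = np.sin(2 * np.pi * B * t_dense)
print(np.max(np.abs(x_linear - x_true)))     # noticeably non-zero
```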

In General

Suppose you have a sampling period length \(T_{s}\):

\begin{equation} \hat{x}(t) = \sum_{m=0}^{\infty} X\qty(mT_{s}) F\qty( \frac{t-mT_{s}}{T_{s}}) = X(0) F \qty(\frac{t}{T_{s}}) + X(T_{s}) F\qty(\frac{t-T_{s}}{T_{s}}) + \dots \end{equation}

where \(F(t)\) is some interpolation function such that:

\begin{equation} \begin{cases} F(0) = 1 \\ F(k) = 0, k \in \mathbb{Z} \backslash \{0\} \end{cases} \end{equation}

Notice that the above is a convolution between the samples of \(X\) and \(F\): each sample \(X(mT_{s})\) places a copy of \(F\) centered at \(t = mT_{s}\), scaled by the sample value.

However, in practice only finitely many terms matter (those whose copy of \(F\) is meaningfully non-zero near \(t\)), so we just slide a window along and sum the nearby samples.

Consider now \(\hat{x}\) at \(t = kT_{s}\):

\begin{align} \hat{x}(kT_{s}) &= \sum_{m=0}^{\infty} X(mT_{s}) F \qty(\frac{kT_{s}- mT_{s}}{T_{s}}) \\ &= \sum_{m=0}^{\infty} X(mT_{s}) F \qty(k-m) \end{align}

Now, recall that \(F\) is \(0\) at all non-zero integers, so only one term is preserved, precisely at \(m = k\). Meaning:

\begin{align} \hat{x}(kT_{s}) &= \sum_{m=0}^{\infty} X(mT_{s}) F \qty(k-m) \\ &= X(kT_{s}) \cdot F(0) \\ &= X(kT_{s}) \end{align}

So this is why we need \(F(0) = 1\) and \(F(k) = 0\) for \(k \in \mathbb{Z} \backslash \{0\}\).
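A sketch of the general formula with a pluggable \(F\) (the function and variable names are assumptions, not from the notes); the sanity check confirms that any valid \(F\) reproduces the samples exactly at \(t = kT_{s}\):

```python
import numpy as np

def interpolate(t, samples, Ts, F):
    """Evaluate x_hat(t) = sum_m samples[m] * F((t - m*Ts) / Ts)."""
    t = np.atleast_1d(t)[:, None]                 # shape (len(t), 1)
    m = np.arange(len(samples))[None, :]          # shape (1, num_samples)
    return (samples[None, :] * F((t - m * Ts) / Ts)).sum(axis=1)

# Sanity check: a valid F (F(0) = 1, F(k) = 0 for non-zero integers k)
# gives back the samples themselves at t = k*Ts.
F_sinc = np.sinc                                  # numpy's sinc is sin(pi t)/(pi t)
Ts = 0.1
samples = np.random.default_rng(1).normal(size=8)
k = np.arange(len(samples))
print(np.allclose(interpolate(k * Ts, samples, Ts, F_sinc), samples))  # True
```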

Zero-Hold Interpolation

Choose \(F\) such that:

\begin{equation} F(t) = \begin{cases} 1, & \text{if}\ |t| < \frac{1}{2} \\ 0, & \text{otherwise} \end{cases} \end{equation}
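As a quick sketch of this choice (assumed naming, usable as the \(F\) in the general formula above):

```python
import numpy as np

def F_zero_hold(u):
    # F(u) = 1 for |u| < 1/2 and 0 otherwise: each sample is simply "held"
    # over the interval of width Ts centered on it, giving a staircase.
    return (np.abs(u) < 0.5).astype(float)

# Quick check of the required properties: F(0) = 1, F(k) = 0 for k != 0
print(F_zero_hold(np.array([0, 1, -1, 2])))   # [1. 0. 0. 0.]
```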

Infinite-Degree Polynomial Interpolation

\begin{equation} F(t) = (1-t) (1+t) \qty(1- \frac{t}{2}) \qty(1+ \frac{t}{2}) \dots = \text{sinc}(t) = \frac{\sin(\pi t)}{\pi t} \end{equation}

This is the BEST interpolation; this is because \(\text{sinc}\) is stretched so that every zero crossing lands exactly at a sample point \(mT_{s}\), meaning the reconstruction we recover is a sum of sinusoids.

This gives a smooth signal; and if sampling was done correctly per the nyquist limit, interpolating with sinc interpolation will give you back your original signal exactly.
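A sketch of sinc interpolation on an assumed toy bandlimited signal, using numpy's np.sinc (which is exactly \(\frac{\sin(\pi t)}{\pi t}\)); away from the edges of the sampled window, the reconstruction matches the original up to truncation of the infinite sum:

```python
import numpy as np

def x(t):
    # toy bandlimited signal: frequencies 2 Hz and 3 Hz, so B = 3 Hz
    return np.sin(2 * np.pi * 2.0 * t) + 0.5 * np.cos(2 * np.pi * 3.0 * t)

B = 3.0
Ts = 1 / (4 * B)                       # sampling interval, well below 1/(2B)
m = np.arange(400)                     # many samples, approximating the infinite sum
samples = x(m * Ts)

def sinc_interp(t):
    # x_hat(t) = sum_m x(m*Ts) * sinc((t - m*Ts) / Ts)
    return (samples[None, :] * np.sinc((t[:, None] - m[None, :] * Ts) / Ts)).sum(axis=1)

# Evaluate away from the edges of the sampled window [0, 400*Ts]
t_dense = np.linspace(10, 30, 500)
print(np.max(np.abs(sinc_interp(t_dense) - x(t_dense))))   # small (truncation error only)
```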

Shannon’s Nyquist Theorem

Let \(X\) be a Finite-Bandwidth Signal whose spectrum lies in \([0, B]\) Hz.

If we reconstruct via:

\begin{equation} \hat{X}(t) = \sum_{m=0}^{\infty} X(mT_{s}) \, \text{sinc} \qty( \frac{t-mT_{s}}{T_{s}}) \end{equation}

where:

\begin{equation} \text{sinc}(t) = \frac{\sin \qty(\pi t)}{\pi t} \end{equation}

  • if \(T_{s} < \frac{1}{2B}\), that is, \(f_{s} > 2B\), then \(\hat{X}(t) = X(t)\) (this is a STRICT inequality!)
  • otherwise, if \(T_{s} > \frac{1}{2B}\), then \(\hat{X}(t) \neq X(t)\), yet \(\hat{X}(mT_{s}) = X(mT_{s})\), and \(\hat{X}\) will be bandwidth limited to \([0, \frac{f_{s}}{2}]\).

This second case is called “aliasing”, or the “stroboscopic effect”.
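A sketch of aliasing with an assumed toy tone: a 7 Hz cosine sampled at \(f_{s} = 10\) Hz produces exactly the same samples as its 3 Hz alias (since \(10 - 7 = 3 < \frac{f_{s}}{2}\)):

```python
import numpy as np

fs = 10.0                              # sampling rate, Hz
n = np.arange(50)
t = n / fs

high = np.cos(2 * np.pi * 7.0 * t)     # 7 Hz tone, above fs/2 = 5 Hz
low  = np.cos(2 * np.pi * 3.0 * t)     # 3 Hz alias: 7 = 10 - 3

# The samples are identical, so any reconstruction can only ever return the
# low-frequency alias living in [0, fs/2].
print(np.allclose(high, low))          # True
```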


Alternate way of presenting the same info:

\begin{equation} \hat{X}(t) = \sum_{m=0}^{\infty} X(mT_{s}) \, \text{sinc} \qty( \frac{t-mT_{s}}{T_{s}}) \end{equation}

Let \(X(t)\), as before, be a continuous-time, bandwidth-limited signal with Bandwidth \(B\); let \(\hat{X}(t)\) be the reconstruction of this signal from samples taken \(T_{s} < \frac{1}{2B}\) apart; then \(\hat{X}(t) = X(t)\). Otherwise, if \(T_{s} > \frac{1}{2B}\), then the reconstruction \(\hat{X}(t) \neq X(t)\), but the samples at \(mT_{s}\) will still match (that is, \(X(m T_{s}) = \hat{X}(m T_{s})\)) and \(\hat{X}(t)\) will be a Baseband Signal whose spectrum is limited to \([0, \frac{1}{2T_{s}}] = [0, \frac{f_{s}}{2}]\). This second case is called “aliasing”, or the “stroboscopic effect”.