Houjun Liu

SU-ENGR76 APR302024

Discrete Fourier Transform

The matrix operation is computationally expensive, as it scales as \(O(N^{2})\). The complexity can be reduced to \(O(N \log N)\) via the Fast Fourier Transform.
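For instance, a minimal sketch (assuming NumPy; the length-\(N\) input below is arbitrary) contrasting the \(O(N^{2})\) matrix multiplication with the \(O(N \log N)\) FFT:

```python
import numpy as np

N = 8
x = np.random.randn(N)  # arbitrary length-N signal

# Naive DFT: build the N x N Fourier matrix and multiply -- O(N^2) work.
n = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(n, n) / N)   # F[k, m] = e^{-2*pi*i*k*m/N}
X_naive = F @ x

# FFT: the same transform in O(N log N).
X_fft = np.fft.fft(x)

assert np.allclose(X_naive, X_fft)
```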

We can compute the Fourier representation forwards and backwards by inverting the Fourier matrix.
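Continuing the sketch above: the Fourier matrix \(F\) is (up to a factor of \(N\)) unitary, so its inverse is its conjugate transpose divided by \(N\):

```python
# Invert the Fourier matrix: F^{-1} = conj(F).T / N, since F / sqrt(N) is unitary.
F_inv = np.conj(F).T / N
x_recovered = F_inv @ X_naive

assert np.allclose(x_recovered, x)
assert np.allclose(np.fft.ifft(X_fft), x)   # the library inverse agrees
```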

Source Coding Review

Basic Source

We can just do Huffman Coding directly.
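As one possible illustration (a sketch, not necessarily the exact construction from lecture), a Huffman code for a discrete source with known symbol probabilities could be built as follows:

```python
import heapq
from collections import Counter

def huffman_code(probabilities):
    """Build a Huffman code (symbol -> bitstring) from a dict of symbol probabilities."""
    # Each heap entry: (probability, tie-breaker, {symbol: partial codeword}).
    heap = [(p, i, {sym: ""}) for i, (sym, p) in enumerate(probabilities.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        p1, _, code1 = heapq.heappop(heap)
        p2, _, code2 = heapq.heappop(heap)
        # Merge the two least-probable subtrees, prepending one bit to each side.
        merged = {s: "0" + c for s, c in code1.items()}
        merged.update({s: "1" + c for s, c in code2.items()})
        heapq.heappush(heap, (p1 + p2, counter, merged))
        counter += 1
    return heap[0][2]

# Example: code a small discrete source directly from its empirical distribution.
source = "AAAABBBCCD"
probs = {sym: n / len(source) for sym, n in Counter(source).items()}
code = huffman_code(probs)
encoded = "".join(code[sym] for sym in source)
print(code, encoded)
```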

Continuous Real Source

We can quantize the continuous source, and then do Huffman Coding.
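A rough sketch of this two-step pipeline, assuming NumPy and reusing the huffman_code helper sketched above (the Gaussian source and the 16-level quantizer are made up for illustration):

```python
import numpy as np
from collections import Counter

samples = np.random.randn(1000)               # stand-in continuous real source
levels = np.linspace(-3, 3, 16)               # 16 quantization levels
indices = np.argmin(np.abs(samples[:, None] - levels[None, :]), axis=1)

# Empirical distribution of the (now discrete) quantized symbols.
probs = {i: n / len(indices) for i, n in Counter(indices).items()}
code = huffman_code(probs)                    # helper sketched above
bits = "".join(code[i] for i in indices)
print(f"{len(bits) / len(indices):.2f} bits/sample after quantization + Huffman")
```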

Continuous-Time Source

A few strategies to obtain discrete symbols (a pipeline sketch follows the list):

  1. sampling: to get discrete points
  2. quantization: to turn the continuous source symbols into a discrete symbol set
  3. compression: Huffman Coding
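Putting the three steps together, a minimal sketch (with made-up parameters, again leaning on the huffman_code helper sketched earlier):

```python
import numpy as np
from collections import Counter

# 1. Sampling: discrete-time samples of a continuous-time signal.
B = 4                                          # assumed bandwidth, Hz
fs = 2 * B + 1                                 # sample fast enough (see below)
t = np.arange(0, 1, 1 / fs)
signal = np.cos(2 * np.pi * 3 * t)             # a 3 Hz tone, within the band

# 2. Quantization: map each real sample to the nearest of a few discrete levels.
levels = np.linspace(-1, 1, 8)
symbols = np.argmin(np.abs(signal[:, None] - levels[None, :]), axis=1)

# 3. Compression: Huffman-code the resulting discrete symbols.
probs = {s: n / len(symbols) for s, n in Counter(symbols).items()}
code = huffman_code(probs)                     # helper sketched earlier
bits = "".join(code[s] for s in symbols)
```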

Lossless Sampling

sampling is the task of obtaining discrete-time samples of a continuous-time signal.

Suppose we have a signal of finite duration \(T\). We know that we can extend it into a \(T\)-periodic function, modelable by a (possibly infinite) Fourier Series, via repeated extension.

Moreover, assume that the spectrum of the signal can be represented by a finite Fourier Series; essentially, we assume that our signal is a Finite-Bandwidth Signal with \(f_{\min} = 0\) and \(f_{\max} = B < \infty\).

By the assumption above, the frequencies in our resulting Fourier Series are bounded by \(\frac{j}{T} \leq B \implies j \leq BT\); writing \(\omega = \frac{2\pi}{T}\), the series truncates to:

\begin{align} &f(x) = b_0 + \sum_{k=1}^{\infty} \qty( a_{k} \sin(k \omega x) + b_{k} \cos(k \omega x)) \\ \Rightarrow\ & f(x) = b_0 + \sum_{k=1}^{BT} \qty( a_{k} \sin(k \omega x) + b_{k} \cos(k \omega x)) \end{align}

Now, let us consider what would happen if we tried to sample this signal every \(S\) seconds:

at \(x=0\)

\begin{equation} y_0 = b_0 + \sum_{j=1}^{BT} \qty( a_{j} \sin 0 + b_{j} \cos 0 ) = b_0 + b_1 + \dots + b_{BT} \end{equation}

at \(x=S\)

\begin{equation} y_{S} = b_0 + \sum_{j=1}^{BT} \qty( a_{j} \sin \qty(2 \pi \frac{j}{T} S ) + b_{j} \cos \qty(2\pi \frac{j}{T} S) ) \end{equation}

Note that we have \(2BT + 1\) unknowns (\(b_0, b_1, \dots, b_{BT}, a_{1}, \dots, a_{BT}\)), so we need to take at least \(2BT+1\) samples within one period \(T\). That is, we must choose \(S\) such that:

\begin{equation} \frac{T}{S} \geq 2BT + 1 \implies S \leq \frac{T}{2BT+1} \approx \frac{T}{2BT} = \frac{1}{2B} \end{equation}

meaning we can reconstruct our whole function as long as our sampling rate \(\frac{1}{S}\) is at least double the bandwidth of our signal. This is the nyquist limit.
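The counting argument can be checked numerically. In the sketch below (assuming NumPy; the period \(T = 1\) s and bandwidth \(B = 4\) Hz are made up), we build a band-limited signal with \(BT\) harmonics, take \(2BT+1\) samples over one period, and recover every coefficient by solving the resulting linear system:

```python
import numpy as np

T, B = 1.0, 4                 # assumed period (s) and bandwidth (Hz)
K = int(B * T)                # highest harmonic index, j <= BT
rng = np.random.default_rng(0)
b = rng.standard_normal(K + 1)          # b_0, ..., b_BT  (DC and cosine terms)
a = rng.standard_normal(K)              # a_1, ..., a_BT  (sine terms)

def f(x):
    """Band-limited signal with the randomly chosen coefficients above."""
    j = np.arange(1, K + 1)
    return b[0] + np.sum(a * np.sin(2 * np.pi * j * x[:, None] / T)
                         + b[1:] * np.cos(2 * np.pi * j * x[:, None] / T), axis=1)

# 2BT + 1 samples spaced S = T / (2BT + 1) apart, i.e. S < 1 / (2B) as required.
M = 2 * K + 1
xs = np.arange(M) * T / M
ys = f(xs)

# One linear equation per sample in the 2BT + 1 unknowns (b_0, a_j, b_j).
j = np.arange(1, K + 1)
A = np.hstack([np.ones((M, 1)),
               np.sin(2 * np.pi * xs[:, None] * j / T),
               np.cos(2 * np.pi * xs[:, None] * j / T)])
coeffs = np.linalg.solve(A, ys)
assert np.allclose(coeffs, np.concatenate([[b[0]], a, b[1:]]))
```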

We state this more formally in the nyquist sampling theorem.