SU-ENGR76 MAY092024
Last edited: August 8, 2025

digital encoding
We allocate different frequency bands to different systems operating in the same environment; by doing this, each system can pack information into its own band and communicate without interfering with the others.
“how do we take a sequence of bits \(10100\dots\) and map it to a continuous-time signal \(X(t)\) such that the spectrum of this signal is limited to \([0, B]\)?”
sinc digital encoding
IDEA: recall the sampling theorem: a signal band-limited to \([0, B]\), sampled at rate \(2B\), can be recovered exactly from its samples by sinc interpolation. As such, we can write the encoded signal as a sum of sinc pulses.
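A sketch of this mapping (the amplitude convention \(X_j \in \{-1, +1\}\) is an assumption here, not necessarily the course's exact choice): place one sample value per bit, spaced \(\frac{1}{2B}\) apart in time, and interpolate with sinc pulses so the spectrum stays within \([0, B]\):

\begin{equation} X(t) = \sum_{j} X_{j}\, \mathrm{sinc}\!\left(2B t - j\right), \qquad X_{j} = \begin{cases} +1 & \text{if bit } j \text{ is } 1 \\ -1 & \text{if bit } j \text{ is } 0 \end{cases} \end{equation}

Each sinc pulse is band-limited to \([0, B]\), so the sum is as well, and sampling \(X(t)\) at times \(t = \frac{j}{2B}\) returns exactly the values \(X_j\).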
SU-ENGR76 MAY142024
High Frequency Signal
The frequency content of the signal is symmetric around the carrier frequency \(f_{c}\), and most of the energy lies within \(f_{c} \pm \frac{1}{T}\). Strictly speaking, some energy may leak outside this band.
energy of signal
\begin{equation} \varepsilon = \frac{1}{T} \int_{0}^{T} y^{2}(t) \dd{t} \end{equation}
If \(\varepsilon > \varepsilon_{T}\) for some threshold \(\varepsilon_{T}\), decode a \(1\). Otherwise (\(\varepsilon \le \varepsilon_{T}\)), decode a \(0\).
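A minimal sketch of this energy-detection rule, assuming an on-off scheme where bit \(1\) is sent as a sinusoid burst and bit \(0\) as silence (the function name `decode_bit` and the threshold value are illustrative, not from the course):

```python
import math

def decode_bit(samples, T, eps_threshold):
    """Energy-detection decoder over one bit interval [0, T].

    Approximates eps = (1/T) * integral_0^T y^2(t) dt with a Riemann
    sum, then compares against the threshold eps_threshold.
    """
    dt = T / len(samples)
    eps = sum(y * y for y in samples) * dt / T
    return 1 if eps > eps_threshold else 0

# Toy example (assumed scheme): bit 1 -> sinusoid burst, bit 0 -> silence
T, n, fc = 1e-3, 1000, 10_000.0
burst = [math.sin(2 * math.pi * fc * (i * T / n)) for i in range(n)]
silence = [0.0] * n

print(decode_bit(burst, T, eps_threshold=0.1))    # average power ~0.5 -> 1
print(decode_bit(silence, T, eps_threshold=0.1))  # zero energy -> 0
```

The threshold \(\varepsilon_T\) would in practice be set between the expected energy of the two waveforms, accounting for noise.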
Hamming Distance
the Hamming distance between two sequences of equal length is the number of positions in which the two sequences differ from each other
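The definition translates directly into code; a small sketch (the function name is illustrative):

```python
def hamming_distance(a, b):
    """Number of positions where two equal-length sequences differ."""
    if len(a) != len(b):
        raise ValueError("sequences must have the same length")
    return sum(x != y for x, y in zip(a, b))

print(hamming_distance("10100", "11110"))  # differ in positions 2 and 4 -> 2
```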
SU-ENGR76 MAY162024
A code is a collection of binary strings (codewords) such that the minimum Hamming distance between any two codewords is some distance \(d\).
This code allows you to correct up to:
- correct up to \(t = \left\lfloor \frac{d-1}{2}\right\rfloor\) bit errors
- detect up to \(d-1\) bit errors (fewer than \(d\) errors can never turn one codeword into another codeword)
minimum length
What is the largest number of codewords \(M\) we can have for a code with codewords of length \(L\) and minimum inter-codeword distance \(d_{c} = 3\)?
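A sphere-packing (Hamming bound) argument gives an upper bound: with \(d_c = 3\) the code corrects \(t = 1\) error, so the radius-\(1\) Hamming balls around the codewords — each containing \(1 + L\) strings — must be disjoint subsets of \(\{0,1\}^{L}\):

\begin{equation} M\,(1 + L) \le 2^{L} \quad\Longrightarrow\quad M \le \frac{2^{L}}{L+1} \end{equation}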
SU-ENGR76 MAY232024
Convolutional Code
A convolutional code, at each time \(j\), takes bit \(b_{j}\) and outputs two bits \(p_{2j-1}\) and \(p_{2j}\), computed from \(b_{j}\) and the previous two bits \(b_{j-1}\) and \(b_{j-2}\).
Working Example
Consider that if you have \(k\) bits to communicate to the receiver:
\begin{equation} b_1, b_2, \dots, b_{k} \end{equation}
Codewords/Output sequence:
\begin{equation} p_1, p_{2}, \dots \end{equation}
Suppose we have a sequence of \(k\) input bits, indexed by
\begin{equation} j = 1, \dots, k \end{equation}
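A minimal encoder sketch, assuming the common rate-\(\frac{1}{2}\) parity equations \(p_{2j-1} = b_j \oplus b_{j-1} \oplus b_{j-2}\) and \(p_{2j} = b_j \oplus b_{j-2}\) (the course's exact parity equations may differ), with the shift register initialized to zeros:

```python
def conv_encode(bits):
    """Rate-1/2 convolutional encoder, constraint length 3.

    Assumed parity equations (a common choice, not necessarily the
    course's exact ones):
      p_{2j-1} = b_j XOR b_{j-1} XOR b_{j-2}
      p_{2j}   = b_j XOR b_{j-2}
    The register starts in the all-zero state (b_0 = b_{-1} = 0).
    """
    b_prev1, b_prev2 = 0, 0  # b_{j-1}, b_{j-2}
    out = []
    for b in bits:
        out.append(b ^ b_prev1 ^ b_prev2)  # p_{2j-1}
        out.append(b ^ b_prev2)            # p_{2j}
        b_prev1, b_prev2 = b, b_prev1      # shift the register
    return out

print(conv_encode([1, 0, 1]))  # -> [1, 1, 1, 0, 0, 0]
```

Each input bit produces two output bits, so \(k\) source bits yield \(2k\) code bits.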
SU-ENGR76 MAY282024
Convolutional Code
Distinct input sequences produce output sequences of the convolutional code that differ in Hamming distance by at least five (the code's free distance is \(5\)).
Decoding
- brute-force decoding: for a source sequence of length \(k\), precompute all \(2^{k}\) possible input sequences and the codewords they map to, then pick the codeword closest to the received sequence — of course, this is not computationally feasible
- Viterbi algorithm: time complexity is linear in \(k\) — roughly \(8(k+2)\) operations, since at each of the \(k+2\) steps there are \(4\) possible states (blocks of 2 bits) and \(8\) branch comparisons of Hamming distance
in general
- \(k_0\): the number of source symbols entering the encoder at each step
- \(n_0\): the number of symbols produced by the encoder at each step
- constraint length \(m_0\): how many input bits are considered at each step
For the convolutional code setup we discussed, we have \(k_0=1\), \(n_0=2\), and \(m_0 = 3\) (one input bit produces two output bits, and three bits are considered at each step).
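The Viterbi recursion above can be sketched as follows, again assuming the parity equations \(p_{2j-1} = b_j \oplus b_{j-1} \oplus b_{j-2}\), \(p_{2j} = b_j \oplus b_{j-2}\) (an assumed standard choice); the \(4\) states are the register contents \((b_{j-1}, b_{j-2})\), and each step scores \(8\) branches by Hamming distance:

```python
def encode(bits):
    """Rate-1/2 encoder with assumed generators p1 = b^b1^b2, p2 = b^b2."""
    b1, b2, out = 0, 0, []
    for b in bits:
        out += [b ^ b1 ^ b2, b ^ b2]
        b1, b2 = b, b1
    return out

def viterbi_decode(received, k):
    """Viterbi decoder for the encoder above: linear in k, since each
    step examines 4 states x 2 branches, scored by Hamming distance
    against the received pair of bits."""
    INF = float("inf")
    states = [(0, 0), (0, 1), (1, 0), (1, 1)]  # (b_{j-1}, b_{j-2})
    metric = {s: (0 if s == (0, 0) else INF) for s in states}  # start at zero state
    path = {s: [] for s in states}
    for j in range(k):
        r1, r2 = received[2 * j], received[2 * j + 1]
        new_metric = {s: INF for s in states}
        new_path = {s: [] for s in states}
        for (b1, b2), m in metric.items():
            if m == INF:
                continue
            for b in (0, 1):
                p1, p2 = b ^ b1 ^ b2, b ^ b2        # expected output pair
                cost = m + (p1 != r1) + (p2 != r2)  # accumulated Hamming distance
                nxt = (b, b1)                       # next register state
                if cost < new_metric[nxt]:
                    new_metric[nxt] = cost          # keep the survivor path
                    new_path[nxt] = path[(b1, b2)] + [b]
        metric, path = new_metric, new_path
    best = min(metric, key=metric.get)  # best-metric final state
    return path[best]

# A single flipped bit is corrected (free distance 5 allows up to 2):
msg = [1, 0, 1, 1]
rx = encode(msg)
rx[2] ^= 1  # channel flips one code bit
print(viterbi_decode(rx, len(msg)))
```

The decoder keeps one survivor path per state, so memory is constant in the number of states and the work per step is fixed, matching the \(8(k+2)\)-operations estimate above.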
