SU-ENGR76 May 23, 2024
Convolutional Code
A convolutional code, at each time \(j\), takes bit \(b_{j}\) and outputs two bits \(p_{2j-1}\), \(p_{2j}\), computed from \(b_{j}\) and the previous two bits \(b_{j-1}\) and \(b_{j-2}\).
Working Example
Suppose you have \(k\) bits to communicate to the receiver:
\begin{equation} b_1, b_2, \dots, b_{k} \end{equation}
Codeword/output sequence (two output bits per input bit):
\begin{equation} p_1, p_{2}, \dots \end{equation}
Let the input be a sequence of \(k\) bits \(b_j\), for
\begin{equation} j = 1, \dots, k \end{equation}
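To make the mapping concrete, here is a minimal Python sketch of such an encoder. The specific parity equations are an assumption, since the notes do not pin them down: the standard constraint-length-3 generators \(p_{2j-1} = b_j \oplus b_{j-1} \oplus b_{j-2}\) and \(p_{2j} = b_j \oplus b_{j-2}\) (the (7, 5) code), which match the distance property discussed below.

```python
def conv_encode(bits):
    """Rate-1/2 convolutional encoder with constraint length 3.

    Assumed parity equations (standard (7, 5) generators; an
    assumption, since the notes do not specify them):
        p_{2j-1} = b_j XOR b_{j-1} XOR b_{j-2}
        p_{2j}   = b_j XOR b_{j-2}
    """
    b1, b2 = 0, 0                  # memory: b_{j-1}, b_{j-2}, initially 0
    out = []
    for b in list(bits) + [0, 0]:  # k input bits + 2 zero bits to flush
        out.append(b ^ b1 ^ b2)    # p_{2j-1}
        out.append(b ^ b2)         # p_{2j}
        b1, b2 = b, b1             # shift the memory
    return out

# k = 4 input bits -> 2 * (k + 2) = 12 output bits
print(conv_encode([1, 0, 1, 1]))  # [1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1]
```

The two appended zero bits flush the memory back to the all-zero state, which is why \(k + 2\) encoding steps appear in the complexity counts below.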
SU-ENGR76 May 28, 2024
Convolutional Code
The output sequences of this convolutional code always differ by at least five bits: any two distinct input sequences produce codewords at Hamming distance at least 5 (the free distance of the code).
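Because the code is linear, the minimum distance between any two codewords equals the minimum Hamming weight over codewords of nonzero inputs, so the claim can be sanity-checked by brute force over short inputs (reusing the assumed `conv_encode` sketch above):

```python
from itertools import product

# Minimum Hamming weight over all nonzero inputs of length <= 6;
# for a linear code this equals the minimum pairwise distance.
min_weight = min(
    sum(conv_encode(bits))
    for n in range(1, 7)
    for bits in product([0, 1], repeat=n)
    if any(bits)
)
print(min_weight)  # 5 for the assumed (7, 5) generators
```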
Decoding
- Brute-force decoding: for an input length of \(k\), precompute all \(2^{k}\) possible input sequences and the codewords they map to, then pick the codeword closest to the received sequence. Of course, this is not computationally feasible for large \(k\).
- Viterbi Algorithm: time complexity is linear in \(k\): roughly \(4(k+2)\) state updates and \(8(k+2)\) Hamming-distance comparisons, since each of the \(k+2\) steps involves 4 possible states (blocks of 2 memory bits) and 8 branch comparisons; see the decoder sketch after this list.
In general:
- \(k_0\): number of source symbols entering the encoder at each step
- \(n_0\): number of symbols produced by the encoder at each step
- constraint length \(m_0\): how many input bits are considered in producing each output block
For the convolutional-code setup we discussed, \(k_0 = 1\), \(n_0 = 2\), and \(m_0 = 3\): one input bit produces two output bits, and three input bits are considered at each step.
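Below is a minimal sketch of the Viterbi decoder for this setup, again assuming the (7, 5) parity equations from the encoder sketch above. The state is the pair of memory bits \((b_{j-1}, b_{j-2})\), giving exactly the 4 states and 8 branch comparisons per step quoted above.

```python
def viterbi_decode(received):
    """Viterbi decoding for the assumed rate-1/2, constraint-3 code.

    State = (b_{j-1}, b_{j-2}): 4 states; each step compares 8
    branches (4 states x 2 hypothesized input bits) and keeps the
    path with the smallest accumulated Hamming distance.
    """
    INF = float("inf")
    cost = {(0, 0): 0, (0, 1): INF, (1, 0): INF, (1, 1): INF}
    paths = {s: [] for s in cost}

    for j in range(0, len(received), 2):
        r1, r2 = received[j], received[j + 1]
        new_cost = {s: INF for s in cost}
        new_paths = {}
        for (b1, b2), c in cost.items():
            if c == INF:
                continue                         # state not yet reachable
            for b in (0, 1):                     # hypothesized input bit
                p1, p2 = b ^ b1 ^ b2, b ^ b2     # expected output pair
                d = c + (p1 != r1) + (p2 != r2)  # accumulated branch metric
                nxt = (b, b1)                    # state after the shift
                if d < new_cost[nxt]:
                    new_cost[nxt] = d
                    new_paths[nxt] = paths[(b1, b2)] + [b]
        cost, paths = new_cost, new_paths

    best = min(cost, key=cost.get)   # survivor with the smallest distance
    return paths[best][:-2]          # drop the two flush bits

# Noiseless round trip (hypothetical example):
print(viterbi_decode(conv_encode([1, 0, 1, 1])))  # [1, 0, 1, 1]
```

Since the encoder is flushed back to the all-zero state, a production decoder would force the final state to \((0, 0)\); taking the minimum-cost survivor, as above, gives the same answer on a clean channel.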
SU-ENGR76 May 30, 2024
Consider a channel with input \(X \in \{0,1\}\) and output \(Y \in \{0,1\}\), where each transmitted bit is flipped with probability \(p\) (a binary symmetric channel). So we have four outcomes:
X | Y | Probability |
---|---|---|
0 | 0 | 1-p |
1 | 1 | 1-p |
0 | 1 | p |
1 | 0 | p |
Writing this out:
- \(P(y=1|x=0) = p\)
- \(P(y=0|x=1) = p\)
- \(P(y=0|x=0) = 1-p\)
- \(P(y=1|x=1) = 1-p\)
Recall the chain rule:
\begin{equation} P(X,Y|A) = P(X|Y,A) P(Y|A) \end{equation}
Consider some input-output pair:
\begin{equation} p(y=10 | x = 00) = p(y_1=1 | x_1=0, x_2=0)\, p(y_2=0 | y_1=1, x_1=0, x_2=0) \end{equation}
For a memoryless channel, each output bit depends only on its own input bit, so this reduces to \(p(y_1=1 | x_1=0)\, p(y_2=0 | x_2=0) = p(1-p)\).
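As a small sketch of this computation: over a memoryless binary symmetric channel the likelihood factors into per-bit terms, one factor of \(p\) for each flipped bit and \(1-p\) for each correct bit.

```python
def bsc_likelihood(x, y, p):
    """P(Y = y | X = x) for a memoryless BSC with flip probability p.

    Bits are flipped independently, so the joint probability is a
    product of per-bit factors: p where x and y differ, 1 - p where
    they agree.
    """
    prob = 1.0
    for xi, yi in zip(x, y):
        prob *= p if xi != yi else 1 - p
    return prob

# The worked example above: x = 00, y = 10 (one flip, one match)
print(bsc_likelihood([0, 0], [1, 0], 0.1))  # p * (1 - p) = 0.09, up to float rounding
```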
SU-ENGR76 Unit 1 Index
Information
- information source can be modeled as a random variable
- surprise
- information value and entropy
Source Coding
- non-singular code
- uniquely decodable code
- Prefix-Free code, which is uniquely decodable and instantaneously decodable
- Average Codeword Length
- generating Prefix-Free code with Huffman Coding
- Block Coding
- Boundedness of Block Coding and why it saves space!
- Dyadic Source
- Shannon’s Source-Coding Theorem
- Non-IID Sequence Can Have Smaller Entropy (which is why we use the Entropy Rate of the Source as the benchmark for how good a code can be)
Signal
SU-ENGR76 Unit 2 Index
Communication System Design!