Posts

SU-ENGR76 MAY282024

Last edited: August 8, 2025

Convolutional Code

Distinct output sequences of the convolutional code differ from one another by at least five bits in Hamming distance; this minimum separation is what gives the code its error-correcting power.
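
As a quick check on this claim, here is a minimal encoder sketch in Python. The notes do not state the generator polynomials, so this assumes the common rate-1/2, constraint-length-3 generators \(1 + D + D^2\) and \(1 + D^2\) (octal 7 and 5), which match the \(k_0 = 1\), \(n_0 = 2\), \(m_0 = 3\) setup described later in this post; the function names are my own. Enumerating all short terminated messages confirms that distinct codewords differ in at least five positions.

```python
def encode(bits, tail=True):
    """Rate-1/2, constraint-length-3 convolutional encoder.

    Assumed generators: g1 = 1 + D + D^2 and g2 = 1 + D^2 (octal 7, 5).
    Each input bit produces two output bits; tail=True appends two zero
    bits so the encoder returns to the all-zero state.
    """
    s1 = s2 = 0                      # the two most recent input bits (the state)
    out = []
    for b in bits + ([0, 0] if tail else []):
        out.append(b ^ s1 ^ s2)      # first generator: current bit + previous two
        out.append(b ^ s2)           # second generator: current bit + bit two steps back
        s1, s2 = b, s1               # shift the register
    return out


def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))


if __name__ == "__main__":
    from itertools import product

    k = 4                            # message length small enough to enumerate
    codewords = [encode(list(m)) for m in product([0, 1], repeat=k)]
    dmin = min(hamming(c1, c2)
               for i, c1 in enumerate(codewords)
               for c2 in codewords[i + 1:])
    print("minimum pairwise Hamming distance:", dmin)   # prints 5
```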

Decoding

  • brute-force decoding: for a source sequence of length \(k\), precompute all \(2^{k}\) possible source sequences and the codewords they correspond to, then compare the received sequence against each; of course, this is not computationally feasible for large \(k\)
  • Viterbi algorithm: time complexity is linear in \(k\), roughly \(4(k+2)\) and \(8(k+2)\) operations; at each of the \(k+2\) trellis steps there are 4 possible states (blocks of 2 bits) to track and 8 Hamming-distance comparisons to make (a decoder sketch appears at the end of this post)

in general, a convolutional code is described by three parameters:

  • \(k_0\): the number of source symbols entering the encoder at each step
  • \(n_0\): the number of coded symbols produced by the encoder at each step
  • constraint length \(m_0\): how many source bits are considered when producing each output block

for the convolutional code setup we discussed, we have \(k_0=1\), \(n_0=2\), and \(m_0 = 3\): each source bit produces two coded bits, and three source bits are considered at each step.
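
The Viterbi decoder mentioned above can be sketched for this setup as follows, again assuming the generators \(1 + D + D^2\) and \(1 + D^2\) from the encoder sketch earlier in this post; the function name and the hard-coded example are illustrative. The decoder tracks 4 states and examines 8 branches per trellis step, which is where the \(4(k+2)\) and \(8(k+2)\) counts come from.

```python
def viterbi_decode(received):
    """Viterbi decoder for the rate-1/2, constraint-length-3 code above
    (assumed generators 1 + D + D^2 and 1 + D^2).

    received is a flat list of channel bits, two per trellis step.
    """
    INF = float("inf")
    states = [(0, 0), (0, 1), (1, 0), (1, 1)]        # the two previous input bits
    metric = {s: (0 if s == (0, 0) else INF) for s in states}   # start in state 00
    paths = {s: [] for s in states}

    for t in range(len(received) // 2):
        r1, r2 = received[2 * t], received[2 * t + 1]
        new_metric = {s: INF for s in states}
        new_paths = {s: [] for s in states}
        for (s1, s2), m in metric.items():
            if m == INF:
                continue
            for b in (0, 1):                         # two branches out of each state
                o1, o2 = b ^ s1 ^ s2, b ^ s2         # encoder output on this branch
                cost = m + (o1 != r1) + (o2 != r2)   # accumulated Hamming distance
                nxt = (b, s1)
                if cost < new_metric[nxt]:           # keep the surviving path
                    new_metric[nxt] = cost
                    new_paths[nxt] = paths[(s1, s2)] + [b]
        metric, paths = new_metric, new_paths

    # a terminated codeword ends in state 00; drop the two tail bits
    return paths[(0, 0)][:-2]


if __name__ == "__main__":
    sent = [1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1]   # encode([1, 0, 1, 1]) from the sketch above
    received = sent[:]
    received[3] ^= 1                              # flip one channel bit
    print(viterbi_decode(received))               # recovers [1, 0, 1, 1]
```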

SU-ENGR76 MAY302024

Last edited: August 8, 2025

Consider a mapping between an input \(X \in \{0,1\}\) and an output \(Y \in \{0,1\}\), where the output is flipped relative to the input (an error) with probability \(p\). This gives four possible outcomes:

X    Y    Probability
0    0    1-p
1    1    1-p
0    1    p
1    0    p

Writing this out:

  • \(P(y=1|x=0) = p\)
  • \(P(y=0|x=1) = p\)
  • \(P(y=0|x=0) = 1-p\)
  • \(P(y=1|x=1) = 1-p\)
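
As a minimal sketch, these transition probabilities can be written as a small lookup table (the function name and \(p = 0.1\) below are illustrative):

```python
def bsc_transition(p):
    """Transition probabilities P(Y = y | X = x) of a binary symmetric
    channel with crossover probability p."""
    return {(x, y): (p if x != y else 1 - p) for x in (0, 1) for y in (0, 1)}


if __name__ == "__main__":
    P = bsc_transition(0.1)
    print(P[(0, 1)], P[(0, 0)])     # 0.1 0.9
    # sanity check: for each input x, the probabilities over y sum to 1
    print(all(abs(P[(x, 0)] + P[(x, 1)] - 1) < 1e-12 for x in (0, 1)))   # True
```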

Recall chain rule:

\begin{equation} P(X,Y|A) = P(X|Y,A) P(Y|A) \end{equation}


Consider some input-output pair, say \(y = 10\) received when \(x = 00\) is sent:

\begin{equation} p(y=10 | x = 00) = p(y_1=1 | x_1=0, x_2=0)p(y_2=0| y_1=1, x_1=0, x_2=0) \end{equation}
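
If the channel is memoryless, the error on each bit is independent of everything else, so the second factor reduces to \(p(y_2=0 | x_2=0)\) and the whole expression becomes \(p(1-p)\). A minimal sketch of this computation, assuming a memoryless binary symmetric channel (the function name and \(p = 0.1\) are illustrative):

```python
def sequence_likelihood(y, x, p):
    """P(y | x) over a memoryless binary symmetric channel with crossover
    probability p: each position contributes a factor of p if the received
    bit differs from the transmitted one, and 1 - p if it matches."""
    prob = 1.0
    for yi, xi in zip(y, x):
        prob *= p if yi != xi else 1 - p
    return prob


if __name__ == "__main__":
    p = 0.1
    # the pair from the notes: y = 10 received when x = 00 was sent
    print(sequence_likelihood([1, 0], [0, 0], p))   # p * (1 - p), about 0.09
```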

SU-MATH109 Problem Set 1

Last edited: August 8, 2025