SU-ENGR76 APR022024

Clarke’s Third Law

Any sufficiently advanced technology is indistinguishable from magic.

SU-ENGR76 APR042024

information

Information is the amount of surprise a message provides.

Shannon (1948): “A Mathematical Theory of Communication.”

information value

The information value, or entropy, of an information source is the probability-weighted average surprise over all possible outcomes:

\begin{equation} H(X) = \sum_{x \in X} s(P(X=x)) \, P(X=x) \end{equation}

where \(s(p) = \log_{2} \frac{1}{p}\) is the surprise of an outcome with probability \(p\); equivalently, \(H(X) = \sum_{x \in X} P(X=x) \log_{2} \frac{1}{P(X=x)}\).
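
A minimal sketch of this definition in Python (the helper name entropy is ours, and the input is assumed to be a valid probability vector summing to 1):

#+begin_src python
import math

def entropy(probs):
    """Shannon entropy in bits: the probability-weighted average surprise,
    H(X) = sum over x of P(x) * log2(1 / P(x)).
    Zero-probability outcomes contribute nothing, so they are skipped."""
    return sum(p * math.log2(1 / p) for p in probs if p > 0)
#+end_src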

properties of entropy

  • entropy is non-negative: \(H(X) \geq 0\)
  • entropy of a uniform source: for \(M \sim CatUni[1, …, n]\), \(p_{i} = \frac{1}{|M|} = \frac{1}{n}\), and \(H(M) = \log_{2} |M| = \log_{2} n\)
  • entropy is bounded: \(0 \leq H(X) \leq H(M)\) where \(|X| = |M|\) and \(M \sim CatUni[1 … n]\) (“the uniform distribution has the highest entropy”); the upper bound is reached if and only if \(X\) is uniformly distributed (a numeric check follows this list).
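
Reusing the entropy sketch above, a quick numeric check of the uniform and boundedness properties (the skewed distribution is made up for illustration):

#+begin_src python
# Uniform source over n = 8 outcomes: H(M) = log2(8) = 3 bits, the maximum.
n = 8
uniform = [1 / n] * n
print(entropy(uniform))   # 3.0

# A skewed source on the same support stays strictly below the bound.
skewed = [1 / 2, 1 / 4, 1 / 8, 1 / 16, 1 / 32, 1 / 64, 1 / 128, 1 / 128]
print(entropy(skewed))    # ~1.98 < 3.0
#+end_src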

binary entropy function

For a binary outcome \(X \in \{1, 2\}\) with \(P(X=1) = p_{1}\) and \(P(X=2) = 1 - p_{1}\), we can write:

\begin{equation} H(X) = p_{1} \log_{2} \frac{1}{p_{1}} + (1 - p_{1}) \log_{2} \frac{1}{1 - p_{1}} \end{equation}
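
A quick numeric look at the binary entropy function, again reusing the entropy sketch from above:

#+begin_src python
# H(p1) is symmetric around p1 = 1/2, where it peaks at exactly 1 bit,
# and falls toward 0 as the outcome becomes certain (p1 -> 0 or 1).
for p1 in (0.1, 0.5, 0.9):
    print(p1, entropy([p1, 1 - p1]))   # ~0.469, 1.0, ~0.469
#+end_src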