Houjun Liu

fairness

fairness through unawareness

procedural fairness, or fairness through unawareness, is a fairness approach built on the idea that if a system knows nothing about the demographics of protected groups, it cannot base its decisions on them. To achieve it:

  1. exclude sensitive features from datasets
  2. exclude proxies of protected groups

Problem: deeply correlated information (such as a person's likes and interests) is hard to get rid of. Individually, such features do little to predict gender, but taken together as a group they can recover protected-group information.
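A minimal sketch of the unawareness recipe and its failure mode on synthetic data (all column names here, such as `gender`, `zip_code`, and `like_i`, are hypothetical): steps 1 and 2 drop the sensitive feature and a known proxy, and the final lines show that a classifier can still recover the protected attribute from the remaining, individually weak features.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 5_000

# Synthetic data: no single "like" feature is strongly predictive of gender,
# but ten of them taken together are.
gender = rng.integers(0, 2, n)
likes = np.stack([gender + rng.normal(0, 2.0, n) for _ in range(10)], axis=1)
df = pd.DataFrame(likes, columns=[f"like_{i}" for i in range(10)])
df["gender"] = gender                                  # sensitive feature
df["zip_code"] = gender * 10 + rng.integers(0, 10, n)  # an obvious proxy

# Steps 1 and 2: drop the sensitive feature and its known proxy.
X = df.drop(columns=["gender", "zip_code"])

# The problem: the remaining features, in aggregate, still recover the group.
clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, X, df["gender"], cv=5).mean()
print(f"protected attribute recovered with accuracy ~{acc:.2f}")
```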

fairness through awareness

instead of hiding protected information from the system, we only care about the outcomes it produces.

fairness through parity

the prediction \(G\) should be equally likely for the different groups \(D\):

\begin{equation} P(G=1|D=0) = P(G=1|D=1) \end{equation}
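A small sketch of checking this parity condition empirically, assuming hypothetical arrays `pred` (the prediction \(G\)) and `group` (group membership \(D\)):

```python
import numpy as np

def parity_gap(pred: np.ndarray, group: np.ndarray) -> float:
    """|P(G=1 | D=0) - P(G=1 | D=1)| from observed predictions."""
    p0 = pred[group == 0].mean()  # estimate of P(G=1 | D=0)
    p1 = pred[group == 1].mean()  # estimate of P(G=1 | D=1)
    return abs(p0 - p1)

# Toy usage: a gap of 0 would mean exact parity between the two groups.
pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(parity_gap(pred, group))  # 0.5 here: far from parity
```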

fairness through calibration

We want the CORRECTNESS of the algorithm to be similar between protected groups.
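A sketch of measuring this, assuming hypothetical arrays `pred`, `truth` (the correct prediction \(G^{*}\)), and `group`: it simply reports \(P(G=G^{*} \mid D=d)\) for each group so the per-group accuracies can be compared.

```python
import numpy as np

def accuracy_by_group(pred, truth, group):
    """Estimate P(G = G* | D = d) for every group d."""
    return {d: float((pred[group == d] == truth[group == d]).mean())
            for d in np.unique(group)}
```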

disparate impact

\begin{equation} 1 - \frac{P(G=G^{*}|D=0)}{P(G=G^{*}|D=1)} \leq \epsilon \end{equation}

where \(G^{*}\) is the correct prediction.

By US law (the "four-fifths rule"), \(\epsilon\) must be 0.2 or smaller for protected groups \(D\); that is, one group's rate must be at least 80% of the other's.
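A sketch of the disparate impact check above, with the same hypothetical `pred`, `truth`, and `group` arrays and the 0.2 threshold from the four-fifths rule:

```python
import numpy as np

def passes_disparate_impact(pred, truth, group, eps=0.2):
    """Check 1 - P(G=G*|D=0) / P(G=G*|D=1) <= eps."""
    acc0 = (pred[group == 0] == truth[group == 0]).mean()  # P(G=G* | D=0)
    acc1 = (pred[group == 1] == truth[group == 1]).mean()  # P(G=G* | D=1)
    return 1 - acc0 / acc1 <= eps

# Toy usage with hypothetical labels: equal per-group accuracy passes easily.
pred  = np.array([1, 1, 0, 0, 1, 0, 1, 1])
truth = np.array([1, 0, 0, 0, 1, 0, 1, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(passes_disparate_impact(pred, truth, group))  # True
```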