
SU-CS161 Embedded Ethics

Last edited: October 10, 2025

Abstraction and Idealization

  • abstraction: omit details of a real-world situation
  • idealization: change aspects of a real-world situation

risks of inclusion and exclusion

  • inclusion: lots of information will be collected about you, raising privacy concerns, etc.
  • exclusion: your voice is less likely to be heard, your needs may not be measured / accounted for, and actions may be taken to make you more legible

perpetuating cycle

  1. broad simplification
  2. solutions fail to translate when incorporated back into the real situation
  3. downstream injustice

incommensurability

lacking a common measure of value (what counts as “more than” / “less than” / “better than”, etc.)

SU-CS161 OCT232025

Last edited: October 10, 2025

Key Sequence

Notation

New Concepts

Important Results / Claims

Questions

Interesting Factoids

AdaBoost

Last edited: October 10, 2025
  1. initialize weights \(\alpha_{i} = \frac{1}{N}\)
  2. for \(t \in 1 … T\)
    • learn this-round classifier \(f_{t}\qty(x)\) on data weights \(\alpha_{i}\)
    • recompute classifier coefficient \(\hat{w}_{t}\) (standard update formulas after this list)
    • recompute weights \(\alpha_{i}\)
    • normalize weights \(\alpha_{i} = \frac{\alpha_{i}}{\sum_{j=1}^{N} \alpha_{j}}\)
  3. final model predicts via \(\hat{y} = \text{sign}\qty(\sum_{t=1}^{T} \hat{w}_{t}f_{t}\qty(x))\) for classifier with \(T\) rounds
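
The steps above leave the update formulas implicit; for reference, the standard AdaBoost updates (assuming labels \(y_{i} \in \qty{-1, +1}\) and normalized weights \(\alpha_{i}\)) are:

  • weighted error: \(\text{err}_{t} = \sum_{i=1}^{N} \alpha_{i} \mathbb{1}\qty[f_{t}\qty(x_{i}) \neq y_{i}]\)
  • classifier coefficient: \(\hat{w}_{t} = \frac{1}{2} \ln \qty(\frac{1 - \text{err}_{t}}{\text{err}_{t}})\)
  • weight update: \(\alpha_{i} \leftarrow \alpha_{i}\, e^{-\hat{w}_{t} y_{i} f_{t}\qty(x_{i})}\)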

This algorithm only works for binary classification, and over its iterations it drives down the exponential loss, an upper bound on the 0/1 loss. gradient boosting will eventually allow a more general form with other loss functions.
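
A minimal sketch of these steps in Python, using decision stumps as the weak learners; the stump choice and helper names (fit_stump, stump_predict) are illustrative assumptions, since the notes do not fix a particular weak learner:

  import numpy as np

  def fit_stump(X, y, alpha):
      """Weak learner: pick the (feature, threshold, sign) decision stump
      with the lowest weighted 0/1 error under data weights alpha."""
      best, best_err = None, np.inf
      for j in range(X.shape[1]):
          for thresh in np.unique(X[:, j]):
              for sign in (1, -1):
                  pred = sign * np.where(X[:, j] > thresh, 1, -1)
                  err = np.sum(alpha * (pred != y))
                  if err < best_err:
                      best_err, best = err, (j, thresh, sign)
      return best, best_err

  def stump_predict(stump, X):
      j, thresh, sign = stump
      return sign * np.where(X[:, j] > thresh, 1, -1)

  def adaboost(X, y, T):
      """X: (N, d) features; y: (N,) labels in {-1, +1}; T: rounds."""
      N = len(y)
      alpha = np.full(N, 1.0 / N)                # step 1: uniform weights
      stumps, w_hat = [], []
      for _ in range(T):
          stump, err = fit_stump(X, y, alpha)    # learn f_t on weights alpha
          w_t = 0.5 * np.log((1 - err) / (err + 1e-12))  # coefficient w_hat_t
          alpha *= np.exp(-w_t * y * stump_predict(stump, X))  # up-weight mistakes
          alpha /= alpha.sum()                   # normalize weights
          stumps.append(stump)
          w_hat.append(w_t)
      return stumps, np.array(w_hat)

  def predict(stumps, w_hat, X):
      """Final model: sign of the weighted sum of weak-learner votes."""
      votes = np.stack([stump_predict(s, X) for s in stumps])  # (T, N)
      return np.sign(w_hat @ votes)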

boosting

Last edited: October 10, 2025

boosting allows combining very simple models to boost results. One good algorithm is AdaBoost.

additional information

example

for instance, we can ensemble four decision trees together: weight the prediction of each tree, add the weighted predictions up, and then run a decision boundary over the sum (for instance, if the output is boolean, encode it as \(\pm 1\), multiply by the weight, sum, and take the sign); see the sketch below.
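
A minimal numeric sketch of that weighted vote; the tree outputs and weights below are made up for illustration:

  import numpy as np

  # hypothetical outputs of four already-trained trees on one example,
  # with boolean predictions encoded as +1 / -1
  tree_votes = np.array([+1, -1, +1, +1])
  weights = np.array([0.4, 0.3, 0.2, 0.1])   # per-tree weights (made up)

  score = weights @ tree_votes   # 0.4 - 0.3 + 0.2 + 0.1 = 0.4
  prediction = np.sign(score)    # decision boundary: take the sign, so +1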

high level ideas

  1. add more features (i.e. extract more, increase model complexity)
  2. add more weak learners together

direct add

Last edited: October 10, 2025