Probability of Failure
\begin{align} p_{\text{fail}} &= \mathbb{E}_{\tau \sim p\qty(\cdot)} \qty[\mathbb{1} \qty{\tau \not\in \psi}] \\ &= \int \mathbb{1} \qty{\tau \not\in \psi} p\qty(\tau) \dd{\tau} \end{align}
that is, the Probability of Failure is just the normalizing constant of the Failure Distribution. As with the Failure Distribution itself, computing this exactly is intractable. We have a few methods to estimate it, namely:
- direct estimation: directly approximate the failure probability from the nominal distribution \(p\) — \(\tau_{i} \sim p\qty(\cdot)\), \(\hat{p}_{\text{fail}} = \frac{1}{m} \sum_{i=1}^{m} \mathbb{1}\qty{\tau_{i} \not\in \psi}\) (sketched, along with Importance Sampling, in the code after this list)
- Importance Sampling: design a distribution to probe failure, namely a proposal distribution \(q\), and then reweight by how different it is from \(p\) — \(\tau_{i} \sim q\qty(\cdot)\), \(\hat{p}_{\text{fail}} = \frac{1}{m}\sum_{i=1}^{m} w_{i} \mathbb{1}\qty{\tau_{i} \not\in \psi}\), where \(w_{i} = \frac{p\qty(\tau_{i})}{q\qty(\tau_{i})}\) (the “importance weight”)
- adaptive importance sampling
- multiple importance sampling
- sequential Monte Carlo
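To make the first two estimators concrete, here is a minimal sketch under toy assumptions that are not from the note: trajectories are scalars, the nominal \(p\) is a standard normal, a “failure” is the rare event \(\tau > 4\), and the proposal \(q\) is a normal shifted toward the failure region.

```python
import numpy as np

rng = np.random.default_rng(0)
THRESHOLD = 4.0  # assumed: "failure" means tau > 4

def is_failure(tau):
    return tau > THRESHOLD

def log_p(tau):
    # log density of the nominal distribution p = N(0, 1)
    return -0.5 * tau**2 - 0.5 * np.log(2 * np.pi)

def direct_estimate(m=100_000):
    # direct estimation: sample tau_i ~ p, average the failure indicator
    tau = rng.standard_normal(m)
    return np.mean(is_failure(tau))

def importance_estimate(m=100_000, shift=4.0):
    # importance sampling: sample tau_i ~ q = N(shift, 1), reweight by w_i = p/q
    tau = rng.standard_normal(m) + shift
    log_q = -0.5 * (tau - shift)**2 - 0.5 * np.log(2 * np.pi)
    w = np.exp(log_p(tau) - log_q)
    return np.mean(w * is_failure(tau))

print("direct:    ", direct_estimate())
print("importance:", importance_estimate())
```

The true value here is \(P(\mathcal{N}(0,1) > 4) \approx 3.2 \times 10^{-5}\); at this sample size the direct estimate is very noisy (often exactly zero), while the importance-sampling estimate concentrates samples in the failure region and comes out far tighter.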
How do you pick a proposal distribution? See proposal distribution.
probabilistic models
multinomial distribution
A probability distribution to model counts of specific outcomes, like a binomial distribution but with more than two possible outcomes per trial.
Like the binomial distribution, we have to assume the trials are independent and that the outcome probabilities are the same on every trial.
“what’s the probability that you get some specific set of counts \(X_{j} = c_{j}\)”:
\begin{equation} P(X_1=c_1, X_2=c_2, \dots, X_{m}=c_{m}) = {n \choose c_1, c_2, \dots, c_{m} } p_{1}^{c_1} \cdot \dots \cdot p_{m}^{c_{m}} \end{equation}
where the big choose is a multinomial coefficient, \(n\) is the total number of trials, \(m\) is the number of possible outcomes, and \(p_{j}\) is the probability of the \(j\)th outcome.
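As a sanity check, here is a small sketch that evaluates the PMF directly from the formula; the fair-die example at the end is assumed for illustration, not taken from the note.

```python
from math import factorial, prod

def multinomial_pmf(counts, probs):
    """P(X_1 = c_1, ..., X_m = c_m) with n = sum(counts) trials."""
    n = sum(counts)
    coeff = factorial(n) // prod(factorial(c) for c in counts)  # multinomial coefficient
    return coeff * prod(p**c for p, c in zip(probs, counts))

# assumed example: roll a fair six-sided die 6 times and see each face exactly once
print(multinomial_pmf([1, 1, 1, 1, 1, 1], [1 / 6] * 6))  # 6!/6^6 ≈ 0.0154
```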
problem with gravity
gravity sucks.
general relativity claims that our best theory of how gravity works does not work with non-
process control block
Each process is controlled by a struct which contains information about the process:
- memory used by the process
- file descriptor table
- thread state
- other accounting
file descriptor table
Within each process, we have a file descriptor table (the ints we get back are indices into this table); each entry points into the open file table.
When a process forks, the child doesn’t get new open file entries; instead, we simply clone the file descriptor table (i.e. parent and child will share the same underlying open file table entries). This is how we can share pipes, as the sketch below shows.
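A minimal Unix-only sketch (using Python’s os module; the specifics are assumed for illustration): the pipe’s two descriptors live in the parent’s file descriptor table, fork clones that table, and the child’s copies refer to the same underlying open file entries, so a write in the child is visible to a read in the parent.

```python
import os

read_fd, write_fd = os.pipe()      # two entries in this process's fd table

pid = os.fork()                    # child gets a cloned fd table
if pid == 0:
    # child: its read_fd/write_fd are copies pointing at the same pipe entries
    os.close(read_fd)
    os.write(write_fd, b"hello from child")
    os.close(write_fd)
    os._exit(0)
else:
    # parent: reads what the child wrote through the shared pipe
    os.close(write_fd)
    print(os.read(read_fd, 1024).decode())
    os.close(read_fd)
    os.waitpid(pid, 0)
```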
Product of Linear Maps
Take two linear maps \(T \in \mathcal{L}(U,V)\) and \(S \in \mathcal{L}(V,W)\); then \(ST \in \mathcal{L}(U,W)\) is defined by:
\begin{equation} (ST)(u) = S(Tu) \end{equation}
Indeed the “product” of Linear Maps is just function composition. Of course, \(ST\) is defined only when \(T\) maps to something in the domain of \(S\).
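Because the product is just composition, in coordinates it corresponds to matrix multiplication. A small sketch with assumed matrices (the specific \(T\), \(S\), and \(u\) are made up for illustration):

```python
import numpy as np

# assumed example: T : R^2 -> R^3 and S : R^3 -> R^2, written as matrices
T = np.array([[1., 0.],
              [2., 1.],
              [0., 3.]])          # maps U = R^2 into V = R^3
S = np.array([[1., 1., 0.],
              [0., 1., 2.]])      # maps V = R^3 into W = R^2

u = np.array([1., 2.])

# (ST)(u) = S(Tu): applying the composed map equals multiplying the matrices first
assert np.allclose(S @ (T @ u), (S @ T) @ u)
print(S @ T)                      # the matrix of ST in L(U, W)
```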
The following properties hold on linear-map products (note that commutativity isn’t one of them!):
associativity
\begin{equation} (T_1T_2)T_3 = T_1(T_2T_3) \end{equation}
identity
\begin{equation} TI = IT = T \end{equation}
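A quick numerical sanity check of these properties, using assumed random \(3 \times 3\) matrices so that every product (and the identity) is defined:

```python
import numpy as np

rng = np.random.default_rng(1)
T1, T2, T3 = (rng.standard_normal((3, 3)) for _ in range(3))
I = np.eye(3)

assert np.allclose((T1 @ T2) @ T3, T1 @ (T2 @ T3))          # associativity
assert np.allclose(T1 @ I, T1) and np.allclose(I @ T1, T1)  # identity
assert not np.allclose(T1 @ T2, T2 @ T1)                    # commutativity fails in general
```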