Bayes Nets as Causal Graphical Models

This notebook accompanies Lecture 17. The exercises are meant to build facility with computing interventional distributions and, more generally, to give you room to play around with encoding joint distributions as Bayes Nets.

To keep dependencies to a minimum, this notebook encodes distributions in a bare-bones way. You should feel free to use matrix or object representations as you see fit.

Our first task will be to encode the conditional probability distributions for the graph

$B$        $E$
  $\searrow~\swarrow$
     $A$
  $\swarrow~\searrow$
$F$        $L$
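
As a minimal sketch, here is one such bare-bones encoding: all variables are binary, each root is stored as a single number $P(\cdot = 1)$, and each CPD is a dict mapping parent values to $P(\cdot = 1)$. The probability values below are illustrative placeholders, not numbers from the lecture.

```python
# Bare-bones encoding: every variable is binary (0 or 1).
# Roots are a single number P(var = 1); CPDs map parent values to P(var = 1).
# NOTE: the numbers here are illustrative placeholders.

p_B = 0.001                    # P(B = 1)
p_E = 0.002                    # P(E = 1)

# P(A = 1 | B = b, E = e), keyed by (b, e)
p_A = {(0, 0): 0.001, (0, 1): 0.29, (1, 0): 0.94, (1, 1): 0.95}

# P(F = 1 | A = a) and P(L = 1 | A = a), keyed by a
p_F = {0: 0.05, 1: 0.90}
p_L = {0: 0.01, 1: 0.70}

def prob(p1, value):
    """P(var = value), given p1 = P(var = 1)."""
    return p1 if value == 1 else 1.0 - p1
```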

We can then answer the usual queries, e.g., $P(F=1\mid A=1)$, by looking up the relevant CPD entries.
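
With the encoding above, this query is a single dictionary lookup, since $A$ is $F$'s only parent:

```python
print(p_F[1])   # P(F = 1 | A = 1) = 0.90 with the placeholder numbers
```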

We can compute arbitrary queries in the usual way. For example, we can compute the marginal probability of F (i.e., $P(F)$) using the factorization of $P(B, E, A, F, L)$.
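
A sketch of that computation, continuing from the cells above: enumerate every assignment, multiply the factors $P(b)\,P(e)\,P(a\mid b,e)\,P(f\mid a)\,P(l\mid a)$, and sum out everything except $F$.

```python
from itertools import product

def joint(b, e, a, f, l):
    """P(B=b, E=e, A=a, F=f, L=l) via the Bayes-net factorization."""
    return (prob(p_B, b) * prob(p_E, e) * prob(p_A[(b, e)], a)
            * prob(p_F[a], f) * prob(p_L[a], l))

def marginal_F(f):
    """P(F = f): sum the joint over all assignments to B, E, A, L."""
    return sum(joint(b, e, a, f, l)
               for b, e, a, l in product([0, 1], repeat=4))

print(marginal_F(1))
```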

We can now compute $P(F\mid do(A:=1))$. We could express this in two ways: we can modify the distribution itself, replacing $P(A\mid B, E)$ with a conditional distribution that puts probability 1.0 on $A=1$, or we can modify our marginalization loop to hold $A$ fixed at 1. Let's do one, then the other.
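
A sketch of both versions, reusing the encoding above (the helper name `marginal_F_given` is mine, not from the lecture):

```python
# Way 1: surgery on the model.  Replace P(A | B, E) with a CPD that puts
# all of its mass on A = 1, then marginalize exactly as before.
p_A_do1 = {(b, e): 1.0 for (b, e) in p_A}   # P(A = 1 | b, e) = 1.0 everywhere

def marginal_F_given(p_A_cpd, f):
    """P(F = f) under an arbitrary CPD for A."""
    return sum(prob(p_B, b) * prob(p_E, e) * prob(p_A_cpd[(b, e)], a)
               * prob(p_F[a], f) * prob(p_L[a], l)
               for b, e, a, l in product([0, 1], repeat=4))

print(marginal_F_given(p_A_do1, 1))   # P(F = 1 | do(A := 1))

# Way 2: modify the loop instead.  Fix a = 1, drop A's factor, and sum
# B, E, L out of P(b) P(e) P(f | 1) P(l | 1).
def marginal_F_do_A1(f):
    return sum(prob(p_B, b) * prob(p_E, e) * prob(p_F[1], f) * prob(p_L[1], l)
               for b, e, l in product([0, 1], repeat=3))

print(marginal_F_do_A1(1))            # same value; here it reduces to p_F[1]
```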

Now that we have the interventional distribution, we can compute the effect of setting $A:=0$ vs. $A:=1$. Since we may not have observed instances of both $A:=0$ and $A:=1$, the way to do this is to compute the expected difference in effect: $\mathbb{E}[F\mid do(A:=0)] - \mathbb{E}[F\mid do(A:=1)] = \sum_{f\in\{0, 1\}} f \cdot P(F=f\mid do(A:=0)) - \sum_{f\in\{0, 1\}} f \cdot P(F=f\mid do(A:=1))$.
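
Reusing `marginal_F_given` and `p_A_do1` from the previous cell, the comparison is short:

```python
# Point-mass CPD for the other intervention, do(A := 0).
p_A_do0 = {(b, e): 0.0 for (b, e) in p_A}   # P(A = 1 | b, e) = 0.0 everywhere

def expected_F(p_A_cpd):
    """E[F] under the given CPD for A (F is binary, so this is P(F = 1))."""
    return sum(f * marginal_F_given(p_A_cpd, f) for f in (0, 1))

effect = expected_F(p_A_do0) - expected_F(p_A_do1)
print(effect)   # p_F[0] - p_F[1] = -0.85 with the placeholder numbers
```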