[Beginners] Toy examples/tutorials for exact inference on discrete variables?

Hi all, this is my first time learning probabilistic programming (ppl), so my question may sound stupid. Any pointers or suggestions are much appreciated!

Looking through the official tutorials, I can't find simple examples of exact inference on a graph of discrete variables (many people suggest the HMM or GMM examples, but you need to know those models plus many things like Poutine and AutoDelta just to understand the code).

I did find some good discussions here, like

But I still don't know how to write my own example (I tried modifying them, but I always fail with errors like "ValueError: Number of einsum subscripts must be equal to the number of operands."). For example, I failed to write a simple model with four binary variables x1, x2, x3, and z = x1 & x2 & x3 (logical AND), and to obtain the exact conditional probability P(z|x1=1). (I know this example doesn't use observed samples or deep learning, but I just want to know how to do that.)

I am also writing notes with "stupid" examples, and I hope they can help others who, like me, know little about ppl.
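(For concreteness, here is what I mean by "exact": since all variables are binary, the conditional can be computed by brute-force enumeration in plain Python. The Bernoulli(0.5) priors on x1, x2, x3 below are just hypothetical placeholders, since I haven't fixed them in my model:)

```python
import itertools

# Hypothetical priors: x1, x2, x3 ~ Bernoulli(0.5), independent.
p = {1: 0.5, 0: 0.5}

num = 0.0   # P(z=1, x1=1)
den = 0.0   # P(x1=1)
for x1, x2, x3 in itertools.product([0, 1], repeat=3):
    if x1 != 1:
        continue  # conditioning on x1 = 1
    w = p[x1] * p[x2] * p[x3]
    den += w
    if x1 & x2 & x3:  # z is deterministic: z = x1 AND x2 AND x3
        num += w

print("P(z=1 | x1=1):", num / den)  # 0.25 = P(x2=1) * P(x3=1)
```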

Hi @dreamerlzl, I am not sure if my answer is helpful, because you have to use some internal utilities to compute such a conditional probability. The benefit is that it applies to more general models.

import torch
import pyro
import pyro.distributions as dist

# config_enumerate marks every discrete sample site for parallel enumeration;
# without it, poutine.enum has nothing to enumerate.
@pyro.infer.config_enumerate
def model(z=None, p1=0.3, p2=0.8, p3=0.4):
    x1 = pyro.sample('x1', dist.Bernoulli(p1))
    x2 = pyro.sample('x2', dist.Bernoulli(p2))
    pyro.sample('z', dist.Bernoulli((x1.bool() & x2.bool()) * p3), obs=z)

p_z_x1 = pyro.do(model, data={'x1': torch.tensor(1.)})
p_z_x1_enum = pyro.poutine.enum(p_z_x1, first_available_dim=-1)
trace = pyro.poutine.trace(p_z_x1_enum).get_trace(z=torch.tensor(0.))
# The model has no plates, so max_plate_nesting=0 and enumeration starts at dim -1.
log_prob_evaluate = pyro.infer.mcmc.util.TraceEinsumEvaluator(
    trace, has_enumerable_sites=True, max_plate_nesting=0)
print("p(z=0|x1=1):", log_prob_evaluate.log_prob(trace).exp())
trace1 = pyro.poutine.trace(p_z_x1_enum).get_trace(z=torch.tensor(1.))
print("p(z=1|x1=1):", log_prob_evaluate.log_prob(trace1).exp())

which outputs

p(z=0|x1=1): tensor(0.6800)
p(z=1|x1=1): tensor(0.3200)

Here are some notes that might make the implementation clearer:

  • I used pyro.do to compute the conditional probability p(z|x1=1). If you want the joint probability p(z, x1=1) instead, just replace it with pyro.condition.
  • config_enumerate tells Pyro that we want to enumerate the discrete sites instead of drawing a single sample from them. Alternatively, you can add the keyword infer={'enumerate': 'parallel'} to each of those sites (this is more flexible because you can control which sites to enumerate). We then wrap the model in poutine.enum to actually perform the enumeration (otherwise those infer keywords are ignored).
  • TraceEinsumEvaluator computes the joint log_prob of a model trace. Alternatively, I think you can use TraceEnum_ELBO as in this answer.
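As a quick sanity check, the same numbers follow from summing over x2 by hand: do(x1=1) clamps x1 and drops its prior, so only x2 remains to enumerate.

```python
p1, p2, p3 = 0.3, 0.8, 0.4  # same parameters as the model above

p_z1 = 0.0
for x2, p_x2 in [(1, p2), (0, 1 - p2)]:
    x1 = 1  # do(x1=1): x1 is clamped, its prior p1 is ignored
    pz1 = p3 if (x1 and x2) else 0.0  # Bernoulli((x1 & x2) * p3) at z=1
    p_z1 += p_x2 * pz1

print("p(z=1|x1=1):", p_z1)      # 0.8 * 0.4 = 0.32
print("p(z=0|x1=1):", 1 - p_z1)  # 0.68
```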

Please let me know if there is anything not clear to you.


Pyro Dev @fehiepsi, thank you very much for the great example!
I know my weak background in ppl makes learning Pyro awkward, but your explanation helps a lot! I think I should first learn some basic ppl theory.

At your convenience, could you also give example code using TraceEnum_ELBO here? The interface looks quite different, and I can't make my modification work. Thank you.

Sure, here is the TraceEnum_ELBO version (recall that with an empty guide, the ELBO loss equals the negative log probability of the model):

import math

def guide(**kwargs):
    pass  # empty guide: all discrete latents are enumerated out in the model

elbo = pyro.infer.TraceEnum_ELBO(max_plate_nesting=0)
p_z_x1 = pyro.do(model, data={'x1': torch.tensor(1.)})
print("p(z=0|x1=1):", math.exp(-elbo.loss(p_z_x1, guide, z=torch.tensor(0.))))
print("p(z=1|x1=1):", math.exp(-elbo.loss(p_z_x1, guide, z=torch.tensor(1.))))

Again, thank you very much! This is much more understandable to me.

This toy model might be helpful too.


Yes it helps a lot! Thank you very much!

Thanks for a very useful discussion.
Apologies for a basic question; I went through the docs and forum with no success.

Suppose I want to sample from a model conditioned on the z variable:
c = pyro.condition(model, data={'z': torch.tensor(1.0)})

Things like:

[c() for _ in range(10)]
Predictive(c, {}, num_samples=10, parallel=True, return_sites=['x1', 'x2'])()

do not seem to provide correct values for x1 or x2.

What is the best way to do parallel sampling from a model, and from the conditioned model, in exact-inference cases like these?