Debugging model regarding Effect Handlers

A few posts back I was asking for help to implement what I called a “Stochastic Bayesian Network”. I came up with the following:
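A minimal sketch of a model with this kind of structure — an assumed Z → A → B → Y chain of Bernoulli sites, using the p[*] / sample[*] naming from below, not necessarily the exact original model — would be:

```python
import jax
import jax.numpy as jnp
import numpyro
import numpyro.distributions as dist


def model(Z=None, B=None, Y=None, n=1):
    # One probability parameter per node; children get one entry per parent state.
    p_Z = numpyro.sample("p[Z]", dist.Beta(1.0, 1.0))
    p_A = numpyro.sample("p[A]", dist.Beta(1.0, 1.0).expand([2]).to_event(1))
    p_B = numpyro.sample("p[B]", dist.Beta(1.0, 1.0).expand([2]).to_event(1))
    p_Y = numpyro.sample("p[Y]", dist.Beta(1.0, 1.0).expand([2]).to_event(1))

    # Z, B, Y are 0/1 integer arrays of length n when observed (hypothetical data).
    with numpyro.plate("data", n):
        z = numpyro.sample("sample[Z]", dist.Bernoulli(p_Z), obs=Z)
        a = numpyro.sample("sample[A]", dist.Bernoulli(p_A[z]))  # A is latent, never observed
        b = numpyro.sample("sample[B]", dist.Bernoulli(p_B[a]), obs=B)
        numpyro.sample("sample[Y]", dist.Bernoulli(p_Y[b]), obs=Y)
```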

Passing Z, B and Y data (A is latent) gives:

Running MCMC returns reasonable parameter (p[*]) distributions. The thing I'm not confident about is the Predictive samples when using effect handlers: it seems to me that handler information only propagates "downstream" in the graph, e.g.

  • Predictive(model, samples)(rng_key, **kwargs)['sample[Z]'] # samples is a dict of 'p[*]' from mcmc.get_samples()
  • Predictive(do(model, {'sample[B]': jnp.array([0])}), samples)(rng_key, **kwargs)['sample[Z]']
  • Predictive(condition(model, {'sample[B]': jnp.array([0])}), samples)(rng_key, **kwargs)['sample[Z]']

all yield the same samples, whereas I'd expect that conditioning on B would shift Z's distribution somehow and thus differ from the first two, which should be equal to each other. Replacing Z with Y (a descendant of B instead of an ancestor) results in different samples for each query.
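Concretely, with a model shaped like the sketch above and `samples` holding only the p[*] draws (the `n` keyword is an assumption from that sketch), the comparison looks like this:

```python
from numpyro.handlers import condition, do
from numpyro.infer import Predictive

rng_key = jax.random.PRNGKey(0)

z_plain = Predictive(model, samples)(rng_key, n=1)["sample[Z]"]
z_do = Predictive(do(model, {"sample[B]": jnp.array([0])}), samples)(rng_key, n=1)["sample[Z]"]
z_cond = Predictive(condition(model, {"sample[B]": jnp.array([0])}), samples)(rng_key, n=1)["sample[Z]"]

# With the same rng_key, all three Z draws come out identical (the behaviour
# described above); repeating the same three queries for "sample[Y]" instead
# gives three different sets of draws.
print(jnp.array_equal(z_plain, z_do), jnp.array_equal(z_plain, z_cond))
```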

Is it clear what I’m struggling with? I’d be happy to share more details if what I provided doesn’t suffice to diagnose any problems. Thanks in advance.

i don’t know if i understand your question, but if you only do inference once and obtain a single set of samples, then you’ve done inference precisely once for one specific model. condition does not go in and do inference; it fixes a latent variable to a particular value (makes it observed).
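for example, with the sketch model above (assumed signature), a trace shows that condition only marks the site as observed and fixes its value; everything upstream is still forward-sampled as before:

```python
from numpyro.handlers import condition, seed, trace

conditioned = condition(model, {"sample[B]": jnp.array([0])})
tr = trace(seed(conditioned, jax.random.PRNGKey(0))).get_trace(n=1)

print(tr["sample[B]"]["is_observed"], tr["sample[B]"]["value"])  # True [0]
print(tr["sample[Z]"]["is_observed"])  # False: Z is still forward-sampled
```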

That’s what I’m trying to do. I’ll try to explain a bit further, I might be missing something:

Running MCMC with data passed for Z, B and Y returns samples for p[Z], p[B], p[Y] and also p[A] and sample[A] (values for A are imputed since it's a latent variable). Ultimately, what I'm interested in is p[Z], p[B], p[Y] and p[A]; that's what I store in samples (note that I do not store sample[A]).
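A sketch of that step (the kernel choice and the names Z_data, B_data, Y_data are assumptions for illustration, reusing the model sketch above):

```python
from numpyro.infer import MCMC, NUTS, DiscreteHMCGibbs

# A is a discrete latent site, so a Gibbs-within-HMC kernel is one way to get
# sample[A] draws alongside the continuous p[*] parameters.
kernel = DiscreteHMCGibbs(NUTS(model))
mcmc = MCMC(kernel, num_warmup=500, num_samples=1000)
mcmc.run(jax.random.PRNGKey(0), Z=Z_data, B=B_data, Y=Y_data, n=len(Z_data))

posterior = mcmc.get_samples()  # contains p[Z], p[A], p[B], p[Y] and sample[A]
samples = {k: v for k, v in posterior.items() if k.startswith("p[")}  # drop sample[A]
```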

Then I’ll try to make inference. What I expect is:

  • Predictive(model, samples)(rng_key, **kwargs) to return samples for sample[Z], sample[A], sample[B] and sample[Y], that is, data sampled conditioned on the parameters stored in samples

  • Predictive(do(model, {'sample[B]': jnp.array([0])}), samples)(rng_key, **kwargs) also to return samples for sample[Z], sample[A], sample[B] and sample[Y], but this time, as far as Y (B's only descendant) is concerned, sample[B] is treated as 0 regardless of the value actually drawn for sample[B]

and

  • Predictive(condition(model, {'sample[B]': jnp.array([0])}), samples)(rng_key, **kwargs), again, to return samples for sample[Z], sample[A], sample[B] and sample[Y], but now with sample[B] being an array of zeros (which it is), and with the distribution of the nodes related to B changing somehow; that is, conditioned on the "sampled value of B" (sample[B]) being fixed to 0, shouldn't its ancestors' distributions differ from the unconditioned query? That change is what I'm not seeing.

can you please create a minimal working example to illustrate precisely what you mean? for example i don’t know what “Running MCMC passing data from…” means. words can be confusing if each person has their own private language. code is unambiguous.

I see, I’m sorry!

I was wondering at first whether the rendered models would reveal anything. Here is the example.