A few posts back I asked for help implementing what I called a “Stochastic Bayesian Network”. I came up with the following:

Passing `Z`, `B`, and `Y` data (`A` is latent) gives:

Running MCMC returns reasonable parameter (`p[*]`) distributions. What I’m not confident about is the `Predictive` samples when using effect handlers: it seems to me that handler information propagates only “downstream” through the graph, e.g.

```python
# samples is a dict of 'p[*]' entries from mcmc.get_samples()
Predictive(model, samples)(rng_key, **kwargs)['sample[Z]']
Predictive(do(model, {'sample[B]': jnp.array([0])}), samples)(rng_key, **kwargs)['sample[Z]']
Predictive(condition(model, {'sample[B]': jnp.array([0])}), samples)(rng_key, **kwargs)['sample[Z]']
```

all yield the same samples, whereas I’d expect that conditioning on `B` would shift `Z`'s distribution somehow and thus differ from the first two (which should be equal to each other). Replacing `Z` with `Y` (a descendant of `B` rather than an ancestor) results in different samples for each query.
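To show what I mean by “downstream only”, here is a pure-NumPy sketch of ancestral (forward) sampling on a hypothetical chain `Z → B → Y` of Bernoulli nodes (names mirror my post, but this is not my actual model, and the substitution below only mimics what I understand `do`/`condition` to be doing inside `Predictive`):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

def forward(b_clamp=None):
    # Ancestral sampling: each node is drawn given its parents only.
    z = rng.binomial(1, 0.3, size=n)                 # root Z
    b = rng.binomial(1, np.where(z == 1, 0.9, 0.2))  # B | Z
    if b_clamp is not None:
        b = np.full(n, b_clamp)                      # substitute B's value, do-style
    y = rng.binomial(1, np.where(b == 1, 0.8, 0.1))  # Y | B
    return z, b, y

z0, _, y0 = forward()            # no handler
z1, _, y1 = forward(b_clamp=0)   # B clamped to 0

# Z is drawn before B is ever looked at, so its marginal is untouched,
# while Y (a descendant of B) shifts.
print(z0.mean(), z1.mean())  # both near 0.3
print(y0.mean(), y1.mean())  # clearly different
```

If this picture is right, then getting `Z`'s distribution to shift under conditioning on `B` would require actual posterior inference over the ancestors, not just forward sampling with a substituted value.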

Is it clear what I’m struggling with? I’d be happy to share more details if what I provided doesn’t suffice to diagnose any problems. Thanks in advance.