Hi!
I am a bit confused about the enumeration strategies for discrete inference.
```python
import torch

import pyro
import pyro.distributions as dist
from pyro.infer import config_enumerate, infer_discrete


@config_enumerate
def model(x_pa_obs=None, x_ch_obs=None, y_obs=None):
    p = x_pa_obs
    # discrete parent: Binomial(1, p), i.e. a coin flip with success probability p
    y = pyro.sample('y_pre', dist.Binomial(probs=p),
                    infer={"enumerate": "sequential"},
                    obs=y_obs)
    # continuous child centered on the discrete parent
    d_ch = dist.Normal(y, 1.0)
    x_ch_pre = pyro.sample('x_ch_pre', d_ch, obs=x_ch_obs)
    return y


data_obs = {'x_pa_obs': torch.tensor(0.5), 'x_ch_obs': torch.tensor(1.0)}

# temperature=1 draws posterior samples of the discrete site
model_discrete = infer_discrete(model, first_available_dim=-1, temperature=1)

y_posts = []
for ii in range(10**4):
    print(f'iteration {ii}', end='\r')
    y_posts.append(model_discrete(**data_obs))

smpl = torch.stack(y_posts)
print(f"mean: {smpl.mean()}")
```
When I use `parallel` as the enumeration strategy, the expected probability p(y | x_ch, x_pa) is inferred correctly (≈0.625). However, if I use `sequential`, inference returns a wrong result (≈0.503). Is there a qualitative difference between the two methods, or is `parallel` just faster? Is there a problem with my code?

In your experience, how accurate is discrete inference with Pyro? How would you suggest going about inferring this probability?
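For reference, this is the hand calculation I compare the sampled mean against: a quick Bayes' rule check, assuming (as in the model) a prior p(y = 1) = 0.5 from `x_pa_obs` and a likelihood x_ch ~ Normal(y, 1). It gives an exact posterior of roughly 0.622, consistent with the parallel run.

```python
import math

# Sanity check: p(y = 1 | x_ch = 1) by Bayes' rule,
# with prior p(y = 1) = 0.5 and likelihood x_ch ~ Normal(y, 1).

def normal_pdf(x, mu, sigma=1.0):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

prior = 0.5                   # p(y = 1), taken from x_pa_obs
x_ch = 1.0                    # observed child value

lik1 = normal_pdf(x_ch, 1.0)  # p(x_ch | y = 1)
lik0 = normal_pdf(x_ch, 0.0)  # p(x_ch | y = 0)
post = prior * lik1 / (prior * lik1 + (1 - prior) * lik0)
print(post)  # ≈ 0.6225
```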
Best,
Gunnar