Hi!
I’m trying to use a custom, pre-generated prior in my model, but it seems that MCMC doesn’t sample from the prior at all.
omega has shape [200, 1] and holds the parameters I want to fit. When I replace the Empirical distribution with Normal(0., 1.0) I get reasonable results and nice posterior distributions for each omega. With Empirical, however, each omega keeps a single constant value across all MCMC samples.
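In other words, the variant that behaves nicely only swaps the prior line, roughly like this:

```python
# Working variant: a standard Normal prior instead of the Empirical one
omega = pyro.sample('omega', dist.Normal(0., 1.0).expand([n]).to_event(1))
```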
Any idea why this happens? I’m not sure Empirical is the right way to do this, but it was the only distribution I found that came close to what I wanted.
Thanks a lot for answering!
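For context, `prior` holds draws from an earlier `pyro.infer.Predictive` run on my forward model. Roughly like this (the forward model and its input are simplified placeholders here, not my actual code):

```python
# Hypothetical sketch of how the prior draws were generated; forward_model and
# f_gpu stand in for my real setup. prior['obs'] comes out shaped [5000, 200].
predictive = pyro.infer.Predictive(forward_model, num_samples=5000)
prior = predictive(f_gpu)
```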
```python
import torch
import pyro
import pyro.distributions as dist

# `predictive` and `prior` come from the earlier Predictive run on the forward
# model; block its sample sites so they don't leak into this model's trace.
predictive = pyro.poutine.block(predictive, hide_all=True)

A = torch.tensor(A, dtype=torch.float32).cuda()
y_data = torch.tensor(y_data, dtype=torch.float32).cuda()  # keep on the same device as A
f_gpu = torch.tensor(f, dtype=torch.float32).cuda()

# Define the inverse model in Pyro
def inverse_model(A, y_data):
    n = A.size(1)  # number of features
    # Prior for omega, drawn from a pre-generated tensor shaped [200, 5000]
    prior_samples = prior['obs'].T.cuda()
    prior_weights = torch.ones(prior_samples.shape).cuda()
    omega = pyro.sample('omega', dist.Empirical(prior_samples, prior_weights).expand([n]).to_event(1))
    sigma = pyro.sample("sigma", dist.Uniform(0., 0.05)).to(A.device)
    # Linear forward model
    mu = A @ omega
    with pyro.plate("data", y_data.shape[0]):
        pyro.sample("obs", dist.Normal(mu, sigma), obs=y_data)
    return omega

inverse_nuts_kernel = pyro.infer.NUTS(inverse_model, init_strategy=pyro.infer.autoguide.init_to_sample())
mcmc = pyro.infer.MCMC(inverse_nuts_kernel, num_samples=10, warmup_steps=10)
with pyro.validation_enabled():
    res = mcmc.run(A, y_data)
```
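As a sanity check, sampling from the Empirical directly (outside the model) does give varying draws, so the issue seems specific to how it behaves inside NUTS:

```python
# Sanity check outside the model: direct draws from the Empirical do vary
prior_samples = prior['obs'].T.cuda()
prior_weights = torch.ones(prior_samples.shape).cuda()
emp = dist.Empirical(prior_samples, prior_weights)
print(emp.sample())  # one value per parameter, picked from the 5000 atoms
print(emp.sample())  # a different draw
```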