I am working on a very basic univariate linear regression, and I'm a little confused about sampling from the posterior.
Here is my code:
import torch
import pyro
import pyro.distributions as dist
from pyro import optim
from pyro.infer import SVI, Trace_ELBO
from pyro.infer.autoguide import AutoDiagonalNormal

height, weight = torch.tensor(df['height'].values), torch.tensor(df['weight'].values)
xbar = weight.mean()

# Model: height ~ Normal(alpha + beta * (weight - xbar), sigma)
def model(weight, height):
    alpha = pyro.sample("alpha", dist.Normal(torch.tensor(178.), torch.tensor(20.)))
    beta = pyro.sample("beta", dist.Normal(torch.tensor(0.), torch.tensor(10.)))
    sigma = pyro.sample("sigma", dist.Uniform(0., 50.))
    mu = pyro.deterministic("mu", alpha + beta * (weight - xbar))
    with pyro.plate("plate", len(weight)):
        pyro.sample("height", dist.Normal(mu, sigma), obs=height)

guide = AutoDiagonalNormal(model)
svi = SVI(model, guide, optim.Adam({"lr": 0.05}), Trace_ELBO())

n_steps = 1000
for step in range(n_steps):
    svi.step(weight, height)
    if step % 100 == 0:
        print('.', end='')
After inference, I’d like to sample the parameters. When I take the quantiles, the values make sense.
guide.quantiles([0.25, 0.5, 0.75])
{'alpha': [tensor(154.4493), tensor(154.6422), tensor(154.8350)],
'beta': [tensor(0.8780), tensor(0.9028), tensor(0.9276)],
'sigma': [tensor(4.8655), tensor(5.0128), tensor(5.1640)]}
But when I sample from the posterior, or inspect the base distribution directly, the sigma values do not match the quantiles and are negative.
guide.get_posterior().sample((3,))
tensor([[154.7828, 0.8699, -2.2352],
[154.9283, 0.9046, -2.2036],
[154.7986, 0.9121, -2.1130]])
guide.get_posterior().base_dist.__dict__
{'loc': Parameter containing:
tensor([154.5699, 0.8959, -2.2106], requires_grad=True),
'scale': tensor([0.2421, 0.0452, 0.0497], grad_fn=<AddBackward0>),
'_batch_shape': torch.Size([3]),
'_event_shape': torch.Size([])}
Why is there this discrepancy? I read in the Bayesian Regression Tutorial that quantiles() "constrains to the site's support", but I don't completely understand what that means. Thank you!
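From poking around, I suspect the -2.2106 loc lives in some unconstrained space. Here is a small sketch of what I tried: applying torch's biject_to transform for the (0, 50) interval (which I'm assuming is the same kind of transform the autoguide uses internally) maps that value back to right around the sigma quantiles:

```python
import torch
from torch.distributions import biject_to, constraints

# Unconstrained sigma loc reported by guide.get_posterior().base_dist above
unconstrained_sigma = torch.tensor(-2.2106)

# biject_to maps the real line onto the interval (0, 50)
# (a scaled sigmoid, as far as I can tell)
transform = biject_to(constraints.interval(0., 50.))
print(transform(unconstrained_sigma))  # ~4.94, close to the 5.01 median above
```

Is this the right mental model, or am I missing something?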