Assume we are training a Bayesian linear regression model. After inference, I'd like to study the uncertainty in my coefficients. Using an `AutoNormal` guide, I can inspect the means and variances using

```
for name, value in pyro.get_param_store().items():
    print(name, value.data)
```

Instead of extracting the values of `AutoNormal.locs.a` and `AutoNormal.scales.a` and plugging them into a Normal distribution, it seems to be common to use Pyro's `Predictive` class:

```
predictive = Predictive(model=model, guide=auto_guide, num_samples=1000)
with torch.no_grad():
    samples = predictive(X, y)
```

whose goal is to compute the posterior predictive distribution (PPD). Since the PPD integral requires sampling from the posterior, `Predictive` can be used as a "hack" to obtain posterior samples easily.
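The Monte Carlo view of this can be sketched in plain NumPy. All concrete numbers below (posterior location 2.0, scale 0.5, noise scale 0.1) are hypothetical stand-ins for a fitted 1-D Gaussian posterior over one coefficient:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Gaussian posterior over a coefficient z ~ p(z | X)
post_loc, post_scale = 2.0, 0.5

# Monte Carlo approximation of the posterior predictive:
# p(y_new | x_new, X) ≈ (1/S) Σ p(y_new | z_s),  with z_s ~ p(z | X)
S = 1000
z_samples = rng.normal(post_loc, post_scale, size=S)  # posterior samples

# One predictive draw per posterior sample: y_new = z * x_new + noise
x_new = 3.0
noise_scale = 0.1
y_new = z_samples * x_new + rng.normal(0.0, noise_scale, size=S)

# The intermediate z_samples are the "hack": posterior draws as a by-product
print(z_samples.mean(), z_samples.std())
```

The key point is that the predictive draws cannot be computed without first drawing `z_samples`, which is why `Predictive` exposes posterior samples along the way.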

After executing, `samples` contains 1000 samples for each coefficient. If I compute the mean and standard deviation of all samples, I get values very close to the values stored in `pyro.get_param_store()` from above.
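This observation can be checked with the manual alternative mentioned earlier: draw from a Normal built directly from the stored location and scale, and the sample statistics recover those parameters. A minimal NumPy sketch, with hypothetical values standing in for `AutoNormal.locs.a` and `AutoNormal.scales.a`:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-ins for the stored variational parameters
loc, scale = 1.5, 0.3

# Manual alternative to Predictive: sample the coefficient directly
samples = rng.normal(loc, scale, size=1000)

# Sample mean and standard deviation recover loc and scale
print(samples.mean(), samples.std())
```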

**Issue**: However, when I do **not** pass a guide or provide the `posterior_samples` argument to `Predictive`, the `samples` variable still contains values, but they look very different.

```
predictive = Predictive(model=model, num_samples=1000)
with torch.no_grad():
    samples = predictive(X, y)
```

It does not make any sense to me that the `Predictive` class can compute `p(x_new|X)` without having access to the guide or samples from `p(z|X)`. Clearly the stored posterior samples are wrong as well. Is this an unintended bug?