Posterior sampling after model fitting

Hi everyone,

I only recently started using PyTorch/Pyro and I’m trying to figure out how to do posterior sampling after I fit any Bayesian regression model.

I have a notebook here of the simple Bayesian regression example (Simple Bayesian Regression example using Pyro · GitHub). After running 1500 iterations of the model fit, could someone perhaps help me out with how I can sample from the posterior of my parameters?

I’m planning to first learn the most basic stuff and then add more complexity to my model including Gaussian processes and autoregressive terms etc, so any help at this stage would be very appreciated!


if you use svi, you can sample from the guide, as in the tutorial

Thanks @jpchen! It looks like I need to run guide(None) to evaluate the parameter posteriors, but since I instantiated a RegressionModel class, this is what I get when I run it:

  RegressionModel(
    (linear): Linear(in_features=2, out_features=1, bias=True)
  )

I've even tried something like `svi_posterior =, p=p, w = np.array([1., 0.5]), b = 4.))` and calling `guide` on that object, but I still get the same result.

Is there a way to evaluate the RegressionModel class to give me the actual params?

i think you’re a bit confused. the guide returns a sample from a distribution over nns. lifted_module is a distribution over nns, and calling it returns a concrete nn. to evaluate it on data, just run the resulting nn on your data (as in the tutorial)
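A torch-only sketch of what that means (the loc/scale tensors below are made-up stand-ins for values the guide would have learned; real Pyro does this lifting for you): one draw from the "distribution over nns" is just an ordinary nn.Linear with sampled weights, and you evaluate it by calling it on data.

```python
import torch
import torch.nn as nn

# stand-in posterior parameters for a Linear(2, 1); in the real setup
# these would come from the guide after SVI training
w_loc, w_scale = torch.zeros(1, 2), 0.1 * torch.ones(1, 2)
b_loc, b_scale = torch.zeros(1), 0.1 * torch.ones(1)

def sample_nn():
    # one draw from the "distribution over nns": sample concrete
    # weights and load them into an ordinary nn.Linear
    net = nn.Linear(2, 1)
    with torch.no_grad():
        net.weight.copy_(torch.normal(w_loc, w_scale))
        net.bias.copy_(torch.normal(b_loc, b_scale))
    return net

x_data = torch.randn(5, 2)
# evaluating on data = running each sampled nn like any other module;
# stacking 100 draws gives predictions of shape (100, 5, 1)
preds = torch.stack([sample_nn()(x_data) for _ in range(100)])
```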

Ah. The example shows how to predict the outcome variable, so I’m guessing there’s no way to actually get the fitted parameter values from the guide?

just print the param store

  In [12]:
    # posterior predictive distribution we can get samples from
    trace_pred = TracePredictive(wrapped_model,
                                 posterior,
                                 num_samples=1000)
    post_pred = trace_pred.run(x_data, None)

How do I generate samples with a different size? I tried changing `x_data` to `x_data[:70,:]` in In [12], which seems reasonable to me, but it returns an error complaining about the shape of the input.

Also, I guess there is a typo in the ‘Linear Regression’ paragraph earlier in the example:
Our input X is a matrix of size N×2 and our output y is a vector of size 2×1.
The input is N x 2, so the output should be N x 1, because the weight matrix, as you point out later, is a 2 x 1 matrix (linear.weight [[-1.90511 -0.18619268]]): (N x 2) times (2 x 1) = (N x 1).
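The shape arithmetic can be checked directly with numpy (N = 5 is arbitrary; the weight values are the ones quoted above):

```python
import numpy as np

N = 5
X = np.random.randn(N, 2)                   # input: N x 2
W = np.array([[-1.90511], [-0.18619268]])   # weight as a 2 x 1 column
y = X @ W                                   # (N x 2) @ (2 x 1)
print(y.shape)  # -> (5, 1)
```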

this is a subtle consequence of the TracePosterior class. what happens under the hood is that the model gets replayed against a posterior execution trace. in relation to the example, that means the posterior object holds model traces whose sample sites have size (N, 2), while the input data has size (70, 2). in this case, i’d recommend just running the model directly to get samples: `model(x_data[:70], y_data[:70])`

I guess there is a typo in the ‘Linear Regression’ paragraph earlier

good catch! thanks for pointing that out
