MCMC linear regression - Very low variance

Hello all,

I have been trying to implement a very simple linear regression based on the pyro tutorials.
When using MCMC to get the posterior distributions of my parameters, I am seeing strange behaviour: the sampled values stay the same throughout the MCMC run, which leads to very low variance.
I was expecting an output more like the one described in this topic: Simple MCMC Linear Regression - #3 by ymp.

The code used is given below.
Can any of you provide me with some guidance?

Thank you in advance.
JPN

import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np

import torch
import torch.nn as nn
import pyro

from pyro.infer import EmpiricalMarginal
from pyro.infer.mcmc import MCMC, NUTS

from pyro.distributions import Normal, Uniform


def generate_data(size = 30, true_intercept = 1, true_slope = 2):
    #
    # generating data
    #
    x = np.linspace(0, 1, size)
    # y = a + b*x
    true_regression_line = true_intercept + true_slope * x
    # add noise
    np.random.seed(453)
    y = true_regression_line + np.random.normal(scale=.5, size=size)

    return x, y, true_regression_line

# Linear regression model built as an NN with a single linear layer
class RegressionModel(nn.Module):
    def __init__(self, p, q):
        # p = number of features
        # q = number of targets
        super(RegressionModel, self).__init__()
        self.linear = nn.Linear(p, q)

    def forward(self, x):
        return self.linear(x)


def Bayesian_model(x_data, y_data, n_features=1, n_outputs=1):

    w_prior = Uniform(-7.*torch.ones(n_outputs, n_features),
                       7.*torch.ones(n_outputs, n_features))
    b_prior = Uniform(-10.*torch.ones(n_outputs), 10.*torch.ones(n_outputs))

    priors = {'linear.weight': w_prior, 'linear.bias': b_prior}

    scale = pyro.sample("sigma", Uniform(0., 5.))
    # lift module parameters to random variables sampled from the priors
    lifted_module = pyro.random_module("module", regression_model, priors)
    # sample a nn (which also samples w and b)
    lifted_reg_model = lifted_module()
    # run the nn forward on data
    prediction_mean = lifted_reg_model(x_data)

    #with pyro.plate("map"):
    #with pyro.plate("data", len(x_data)):
        # condition on the observed data
    pyro.sample("obs",
                    Normal(prediction_mean, scale),
                    obs=y_data)
    return prediction_mean


if __name__ == '__main__':
    print("Pytorch version=", torch.__version__)
# data generation
    x_data, y_data, true_regression_line = generate_data()
# display data & true regression line
    fig = plt.figure(figsize=(7, 7))
    ax = fig.add_subplot(111, xlabel='x', ylabel='y', title='Generated data and underlying model')
    ax.plot(x_data, y_data, 'x', label='sampled data')
    ax.plot(x_data, true_regression_line, label='true regression line', lw=2.)
    plt.legend(loc=0);
    plt.show()

    x_data = torch.tensor(x_data, dtype=torch.float)
    y_data = torch.tensor(y_data, dtype=torch.float)
    # adding dummy dimension to 1D vector to allow matrix multiplication in nn.Linear
    x_data.unsqueeze_(1)
    y_data.unsqueeze_(1)
    print("\nData size")
    print("x:", x_data.shape)
    print("y:", y_data.shape)

# Model definition
    regression_model = RegressionModel(1, 1)

    nuts_kernel = NUTS(Bayesian_model, adapt_step_size=False)
    hmc_posterior = MCMC(nuts_kernel, num_samples=1000, warmup_steps=200) \
        .run(x_data, y_data)

    marginal_bias = EmpiricalMarginal(hmc_posterior, sites=['module$$$linear.bias'])
    marginal_weight = EmpiricalMarginal(hmc_posterior, sites=['module$$$linear.weight'])
    marginal_sigma = EmpiricalMarginal(hmc_posterior, sites=['sigma'])
    print("\nWeight:")
    print("    Mean=", marginal_weight.mean.item())
    print("Variance=", marginal_weight.variance.item())
    print("\nBias:")
    print("    Mean=", marginal_bias.mean.item())
    print("Variance=", marginal_bias.variance.item())
    print("\nsigma:")
    print("    Mean=", marginal_sigma.mean.item())
    print("Variance=", marginal_sigma.variance.item())

    sns.distplot(marginal_bias._get_samples_and_weights()[0].numpy())
    plt.show()

Check out the bayesian regression tutorial for the model specification and example usage. In particular,

  • you do not need to use poutine.lift or register any parameters using pyro.random_module, since HMC/NUTS does not need to interact with the param store.
  • you have set adapt_step_size to False, and are most likely using a step size that results in all samples from the integrator getting rejected, i.e. acceptance_probability = 0. Once you have corrected your model, you should remove this kwarg (by default, this is set to True). A minimal sketch combining both fixes is included after this list.
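
For reference, here is a minimal sketch of what the corrected model could look like: the weight, bias and noise scale are explicit pyro.sample sites instead of a lifted nn.Module, and NUTS is left at its default adapt_step_size=True. It reuses the Normal/Uniform imports and the (N, 1) x_data/y_data tensors from the code above; the site names "weight" and "bias" are just illustrative, and this uses the same MCMC/EmpiricalMarginal API as your snippet (newer Pyro versions use mcmc.get_samples() instead).

def bayesian_model(x_data, y_data):
    # priors on slope, intercept and observation noise (same ranges as above)
    weight = pyro.sample("weight", Uniform(-7., 7.))
    bias = pyro.sample("bias", Uniform(-10., 10.))
    sigma = pyro.sample("sigma", Uniform(0., 5.))
    # linear predictor on the squeezed 1D data
    prediction_mean = bias + weight * x_data.squeeze(-1)
    # condition on the observed data
    with pyro.plate("data", len(x_data)):
        pyro.sample("obs", Normal(prediction_mean, sigma), obs=y_data.squeeze(-1))

nuts_kernel = NUTS(bayesian_model)  # adapt_step_size defaults to True
posterior = MCMC(nuts_kernel, num_samples=1000, warmup_steps=200) \
    .run(x_data, y_data)
marginal_weight = EmpiricalMarginal(posterior, sites=['weight'])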

Many thanks for the quick and very useful answer! :grinning: MCMC is working as expected now.

I was using lift so that I could also run variational inference later and compare the results with those obtained using the Monte Carlo approach.
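
For completeness, a rough sketch of how the same explicit-sample-site model could be reused for that variational-inference comparison without random_module, by pairing it with an autoguide. The import path for the autoguide depends on the Pyro version (pyro.contrib.autoguide in older releases, pyro.infer.autoguide in newer ones), and the learning rate and number of steps below are just placeholders.

from pyro.contrib.autoguide import AutoDiagonalNormal
from pyro.infer import SVI, Trace_ELBO
from pyro.optim import Adam

pyro.clear_param_store()
guide = AutoDiagonalNormal(bayesian_model)
svi = SVI(bayesian_model, guide, Adam({"lr": 0.01}), loss=Trace_ELBO())
for step in range(2000):
    svi.step(x_data, y_data)
# samples from the fitted guide can then be compared against the MCMC marginals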

Hi, I also have this problem. I don’t understand your first point “you do not need to use poutine…”. Can you explain it in detail?