Getting estimates of parameters that use PyroSample

I have what is hopefully an easy question for the forum: I have a PyroModule model that includes some parameters that use PyroSample in the initialization to specify their priors, and others that are defined using pyro.sample() in the forward method. It's easy to get posterior estimates for the latter, but it's not clear to me how to get estimates for the parameters that are specified by PyroSample. Do I need to use poutine to trace them manually?

I have tried accessing the parameter directly as an attribute of the model, but that seems to just sample from the prior (rather than from the posterior of the fitted model).

I think if you specify a PyroSample variable as:

self.x = PyroSample(lambda self: dist.Normal(self.loc, self.scale))

then you’ll get a sample given the optimized self.loc and self.scale. But I’m not sure how you are using PyroSample in your class.

Yeah, that’s how I have it set up:

        self.ls = pyro.nn.PyroSample(
            lambda self: Uniform(
                torch.DoubleTensor([100.0]).to(device),
                torch.DoubleTensor([500.]).to(device),
            )
        )

        self.amp = pyro.nn.PyroSample(
            lambda self: Gamma(
                torch.DoubleTensor([2.0]).to(device),
                torch.DoubleTensor([0.5]).to(device),
            )
        )

        self.K = gp.kernels.RBF(input_dim=1, variance=self.amp, lengthscale=self.ls)

but the optimized values for ls and amp do not show up in the quantiles, and sampling from them just draws from the priors. Do I need to be using Predictive here?

It seems that there are no parameters to learn for self.ls and self.amp. I assumed that you want to learn some parameters such as loc and scale; in that case, you can do

self.loc = PyroParam(...)
self.scale = PyroParam(...)
self.x = PyroSample(lambda self: dist.Normal(self.loc, self.scale))

If you are using a custom guide or an autoguide, you can get posterior samples by running your guide. If you want to take samples from the guide and replay them in the model (to get e.g. the posterior predictive), you can use Predictive. Otherwise, if you run the model directly, it only gives you a sample from the “optimized” model (i.e. the model with its optimized parameters, which can be different from the parameters of the guide).
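For example, something like this (just a sketch; guide, model, and model_args are stand-ins for your trained autoguide, your PyroModule, and whatever arguments your model takes):

from pyro.infer import Predictive

# One posterior sample of every latent site, drawn directly from the trained guide.
posterior_latents = guide()

# Replay guide samples through the model to get posterior (predictive) samples.
predictive = Predictive(model, guide=guide, num_samples=500)
samples = predictive(*model_args)  # dict of tensors keyed by sample site name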

I was interested in learning the posterior distributions of self.ls and self.amp; their hyperparameters are fixed, right?

What is odd is that other parameters defined with PyroSample show up in my quantiles, but the GP parameters do not. For example, I have:

        self.s_mu = pyro.nn.PyroSample(
            lambda self: HalfCauchy(torch.DoubleTensor([1.0]).to(device))
        )

        self.ls = pyro.nn.PyroSample(
            lambda self: Uniform(
                torch.DoubleTensor([100.0]).to(device),
                torch.DoubleTensor([500.]).to(device),
            )
        )

        self.amp = pyro.nn.PyroSample(
            lambda self: Gamma(
                torch.DoubleTensor([2.0]).to(device),
                torch.DoubleTensor([0.5]).to(device),
            )
        )

yet when I generate quantiles from the fitted model, ls and amp are not there:

estimates = guide.quantiles([0.05, 0.5, 0.95])
estimates.keys()
dict_keys(['s_mu', 'mu', 'beta', 'alpha', 'gamma', 'psi', 'sigma'])
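To double-check which sample sites the model actually registers, I can trace one run of it (a sketch; model_args stands in for my real inputs):

from pyro import poutine

# Run the model once under a trace and list the names of its sample sites.
trace = poutine.trace(model).get_trace(*model_args)
print([name for name, site in trace.nodes.items() if site["type"] == "sample"])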

I think I may have made things unnecessarily complicated by porting the model to a PyroModule.

I suspect that you didn’t run your GP model inside your model. For PyroModule, the common pattern is:

class A(PyroModule):
    def __init__(...):
        # define PyroParam attributes
        # (optionally) define PyroSample attributes, e.g. self.foo = PyroSample(...)

    def any_method(...):
        # do something with the params if needed
        # draw a sample from `foo` by accessing the attribute, e.g. y = self.foo
        # if `self.foo` is accessed multiple times, use the @pyro_method decorator
        # on any_method (this can be skipped for the `forward` method)
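Concretely, a minimal version of that pattern might look like this (just a sketch with made-up names, not your model):

import torch
import pyro
import pyro.distributions as dist
from pyro.nn import PyroModule, PyroParam, PyroSample

class A(PyroModule):
    def __init__(self):
        super().__init__()
        self.loc = PyroParam(torch.tensor(0.0))
        self.scale = PyroParam(torch.tensor(1.0), constraint=dist.constraints.positive)
        # the prior for `foo` depends on the learnable loc/scale above
        self.foo = PyroSample(lambda self: dist.Normal(self.loc, self.scale))

    def forward(self, data):
        foo = self.foo  # accessing the attribute draws a sample of `foo`
        with pyro.plate("data", len(data)):
            return pyro.sample("obs", dist.Normal(foo, 1.0), obs=data)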

Could you paste the code in your model that involves GP?

I’m calling the GP’s model() method inside my model. Here are the relevant bits, I think:

class ParkBiasModel(pyro.nn.PyroModule):

    def __init__(self, ...):

        super().__init__()

        ...

        self.ls = pyro.nn.PyroSample(
            lambda self: Uniform(
                torch.DoubleTensor([100.0]).to(device),
                torch.DoubleTensor([500.]).to(device),
            )
        )

        self.amp = pyro.nn.PyroSample(
            lambda self: Gamma(
                torch.DoubleTensor([2.0]).to(device),
                torch.DoubleTensor([0.5]).to(device),
            )
        )

        self.K = gp.kernels.RBF(input_dim=1, variance=self.amp, lengthscale=self.ls)

        self.beta = gp.models.GPRegression(
            self.game_days, None, self.K, noise=torch.tensor(0.).to(device)
        )

        ...

    def forward(self, day_index, ...):

        ...

        f, f_var = self.beta.model()
        beta = pyro.sample("beta", Normal(f, torch.sqrt(f_var)).to_event())

        ...

I’m not so sure, but I would do it this way:

K = gp.kernels.RBF(input_dim=1)
self.beta = gp.models.GPRegression(game_days, None, K, ...)
self.beta.kernel.lengthscale = ...
self.beta.kernel.variance = ...

or

K = gp.kernels.RBF(input_dim=1)
K.lengthscale = ...
K.variance = ...
self.beta = gp.models.GPRegression(game_days, None, K, ...)
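where the ... would be the PyroSample priors on the kernel hyperparameters, e.g. something like this sketch (reusing the imports and device from your snippet, with plain float tensors instead of DoubleTensor):

K = gp.kernels.RBF(input_dim=1)
K.lengthscale = pyro.nn.PyroSample(
    Uniform(torch.tensor(100.0, device=device), torch.tensor(500.0, device=device))
)
K.variance = pyro.nn.PyroSample(
    Gamma(torch.tensor(2.0, device=device), torch.tensor(0.5, device=device))
)
self.beta = gp.models.GPRegression(
    game_days, None, K, noise=torch.tensor(0.0, device=device)
)

That way the priors live on the kernel itself, so they should be sampled when self.beta.model() runs and show up in the guide.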

Does PyTorch’s nn.Module allow us to do something like this?

def __init__(self, ...):
    self.A = some_module
    self.B = new_module(self.A)

I feel that there will be some name conflicts here: will we access the parameters of that module through A.foo or B.A.foo? I’m not so sure, but it is better to avoid such conflicts.
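For a plain nn.Module, a quick check suggests that the shared submodule’s parameters only show up once, under the first attribute name (sketch):

import torch.nn as nn

class Wrapper(nn.Module):
    def __init__(self, inner):
        super().__init__()
        self.inner = inner

class Outer(nn.Module):
    def __init__(self):
        super().__init__()
        self.A = nn.Linear(2, 2)
        self.B = Wrapper(self.A)  # the same module is now reachable as A and as B.inner

print([name for name, _ in Outer().named_parameters()])
# named_parameters() deduplicates shared parameters, e.g. ['A.weight', 'A.bias']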