Same code as GP Regression Tutorial but it does not work

I am using the latest version of Pyro. I copied the code from the GP Regression tutorial, and after running this block of code:

optimizer = torch.optim.Adam(gpr.parameters(), lr=0.005)
loss_fn = pyro.infer.Trace_ELBO().differentiable_loss
losses = []
num_steps = 2500 if not smoke_test else 2
for i in range(num_steps):
    optimizer.zero_grad()
    loss = loss_fn(gpr.model, gpr.guide)
    loss.backward()
    optimizer.step()
    losses.append(loss.item())

then, when I try to plot the predictive posterior distribution (the plotting step I use is sketched at the end of this post), I get this:

[plot of the predictive posterior omitted]
When I continue with the subsequent tutorial code for sparse GP regression, it produces the correct figure, just as in the tutorial…
I just can’t figure out what’s going wrong…
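
For reference, the plotting step I mention above follows the tutorial’s pattern; roughly something like this (the test inputs and the band width here are placeholders rather than my exact code):

import torch
import matplotlib.pyplot as plt

Xtest = torch.linspace(-0.5, 5.5, 500)  # placeholder test inputs
with torch.no_grad():
    # predictive mean and covariance from the trained GPRegression object `gpr`
    mean, cov = gpr(Xtest, full_cov=True, noiseless=False)
    sd = cov.diag().sqrt()

plt.plot(Xtest.numpy(), mean.numpy(), "r", lw=2)  # posterior mean
plt.fill_between(Xtest.numpy(),
                 (mean - 2.0 * sd).numpy(),
                 (mean + 2.0 * sd).numpy(),
                 color="C0", alpha=0.3)  # roughly a 95% credible band
plt.show()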

Hi @shixinxing, I just ran the tutorial again and it outputs the expected results. Did you change something in the tutorial code?


Thanks a lot! I haven’t found what I missed, but there must be something wrong in my code; I will check it more carefully tomorrow. I would like to ask another question, if you don’t mind: does Pyro use model.guide to approximate the predictive posterior distribution even though the GP posterior can be computed exactly? I have designed a custom kernel that has no parameters like lengthscale or variance; can I use the same method as in the tutorial to compute the posterior? I tried, and the loss seems to remain constant no matter how many training epochs I set, so I don’t know whether this is normal. By the way, what is model.guide exactly? Is it a Gaussian variational distribution? Thanks!

I think this should be fine as long as your kernel inherits from gp.kernels.Kernel. You can mimic the implementation of the Brownian kernel to define your own; a rough sketch is below.
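
Something along these lines should work; this is only a sketch of a parameter-free kernel (the MinKernel name and the min(x, z) covariance are illustrative, assuming non-negative 1-D inputs, in the spirit of the Brownian kernel):

import torch
import pyro.contrib.gp as gp

class MinKernel(gp.kernels.Kernel):
    # Parameter-free kernel k(x, z) = min(x, z), a Brownian-motion-like covariance.
    def __init__(self, input_dim=1, active_dims=None):
        super().__init__(input_dim, active_dims)

    def forward(self, X, Z=None, diag=False):
        X = X.reshape(-1, 1)      # ensure an (N, 1) column of 1-D inputs
        if diag:
            return X.reshape(-1)  # k(x_i, x_i) = min(x_i, x_i) = x_i
        Z = X if Z is None else Z.reshape(-1, 1)
        return torch.min(X, Z.t())  # full (N, M) covariance matrix

You can then pass it to the model as usual, e.g. gp.models.GPRegression(X, y, MinKernel(), noise=torch.tensor(0.1)).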

Is it a Gaussian variational distribution?

It depends on how you use autoguide. By default, we use a Delta guide (i.e., doing MAP). If you use Normal there, it will use a Gaussian variational distribution to approximate the posterior of that parameter.
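
For example (a rough sketch; the LogNormal prior and putting it on the lengthscale are just for illustration):

import torch
import pyro
import pyro.distributions as dist
import pyro.contrib.gp as gp

X = torch.linspace(0.0, 1.0, 20)
y = torch.sin(2 * X)
gpr = gp.models.GPRegression(X, y, gp.kernels.RBF(input_dim=1), noise=torch.tensor(0.1))

# put a prior on the lengthscale; by default the guide for it is then a Delta (MAP)
gpr.kernel.lengthscale = pyro.nn.PyroSample(dist.LogNormal(0.0, 1.0))

# ask for a Gaussian variational distribution over this parameter instead
gpr.kernel.autoguide("lengthscale", dist.Normal)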

Hi @fehiepsi, thank you for your answers. I just checked my code and ran it in the Jupyter notebook again, and I find that I barely changed the tutorial code.

  • When I restart the Jupyter kernel and run all blocks of code, I get the same results as in my question yesterday (GP code 2.ipynb);
  • However, after that, I rerun the specific training code alone:

optimizer = torch.optim.Adam(gpr.parameters(), lr=0.005)
loss_fn = pyro.infer.Trace_ELBO().differentiable_loss
losses = []
num_steps = 2500 if not smoke_test else 2
for i in range(num_steps):
    optimizer.zero_grad()
    loss = loss_fn(gpr.model, gpr.guide)
    loss.backward()
    optimizer.step()
    losses.append(loss.item())

  • I get the expected results in GP code.ipynb…
    It’s just kind of weird… I don’t know how to explain this… Thanks a lot!