Interpret noise parameter of GPRegression


I use Pyro’s GPRegression model to approximate a function. When I print model.named_parameters(), there is a parameter called noise, which I understand as the difference between the ground truth and the observation. The value of this parameter is initially 0 (if I don’t set a prior), but is updated to a real scalar value (positive or negative) after training the model. I thought the noise epsilon was i.i.d. N(0, sigma).

My question is: how should I interpret the noise value? If it is negative, does that mean I underestimate the ground truth at every single point? Or is it the expected value of epsilon?


Hi @stbeceti, parameters need to be optimized in unconstrained space (i.e. over the real line). To access the actual (constrained) value of the noise, you can look at this section of the GP tutorial.

Hello @fehiepsi,
thank you very much for the answer. When I print model.noise, the corresponding value is always zero (if I have not set a prior), so no noise seems to be assumed. I guess the value model.noise_unconstrained is not taken into account during modeling and optimization and therefore has no meaning if I do not set a prior. Is that correct?

Hmm, I think noise should be 1 by default. Could you provide some reproducible code?

You are right, noise is 1 by default and gets updated after training. Seems like I had a very special case. I can’t reproduce the zero noise.

But is my assumption correct about noise_unconstrained?

The optimizer optimizes noise_unconstrained, and model.noise returns noise_unconstrained.exp() for you. Note that noise is a positive parameter, and optimizers do not work directly with a positive parameter; they work with an unconstrained version of it. If we use the exp transform, noise_unconstrained is the log of noise; if we use the softplus transform, noise_unconstrained is the softplus inverse of noise. I think in GP we use the exp transform by default for positive parameters.
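To make the transform concrete, here is a plain-Python sketch (standard library only, function names are my own) of how the exp and softplus transforms map an unconstrained real value to a positive one. It also shows why the default noise of 1 corresponds to an unconstrained value of 0, which may be the zero the original poster saw:

```python
import math

def to_constrained_exp(u):
    """exp transform: any real u maps to a positive value."""
    return math.exp(u)

def to_unconstrained_exp(x):
    """Inverse of the exp transform: log."""
    return math.log(x)

def to_constrained_softplus(u):
    """softplus transform: log(1 + exp(u)), also positive for all real u."""
    return math.log1p(math.exp(u))

def to_unconstrained_softplus(x):
    """softplus inverse: log(exp(x) - 1)."""
    return math.log(math.expm1(x))

# Under the exp transform, the default noise of 1.0 corresponds to an
# unconstrained value of log(1.0) = 0.0.
print(to_unconstrained_exp(1.0))  # 0.0
print(to_constrained_exp(0.0))    # 1.0
```

Either transform guarantees the optimizer can take unconstrained gradient steps while the model always sees a strictly positive noise value.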


Thank you very much!