Stabilizing GP kernels for use in MultivariateNormal

I’m trying to build a latent GP that I’m going to use to model temporal variation in a larger model. So, rather than using GPRegression, I’m taking the kernel and using it to generate covariance matrices for a MultivariateNormal, as follows:

import torch
import pyro
import pyro.contrib.gp as gp
from pyro.distributions import Exponential, Gamma, MultivariateNormal

# Population trend in metric (W = number of weeks, weeks = input times)
ls = pyro.sample('ls', Gamma(torch.FloatTensor([4.]), torch.FloatTensor([1.])))
amp = pyro.sample('amp', Exponential(torch.FloatTensor([1.])))
K = gp.kernels.RBF(input_dim=1, variance=amp, lengthscale=ls)
beta = pyro.sample('beta', MultivariateNormal(torch.zeros(W), covariance_matrix=K(torch.FloatTensor(weeks))))

However, when I do this, I almost always get a singular-matrix error from the Cholesky factorization:

cholesky_cpu: U(5,5) is zero, singular U.

What’s the best way to stabilize this? Should I be adding a tiny diagonal value to the generated matrix, or should I be adding a WhiteNoise kernel to the RBF?

Hi @fonnesbeck. I think float32 is too imprecise for GP problems; you might consider switching to float64. To resolve the Cholesky issue, adding a tiny diagonal (jitter) term is faster and simpler than adding a WhiteNoise kernel, though the two approaches should be equivalent.
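For reference, here is a minimal sketch of both suggestions together: switching the default dtype to float64 and adding jitter before the Cholesky. It uses a hand-rolled RBF kernel in plain PyTorch rather than `gp.kernels.RBF`, and the values of `W`, the jitter size, and the hyperparameters are made up for illustration:

```python
import torch

# float64 gives the Cholesky much more numerical headroom than float32
torch.set_default_dtype(torch.float64)

def rbf_kernel(x, variance, lengthscale):
    # squared-exponential (RBF) Gram matrix for 1-D inputs x of shape (N,)
    sq_dists = (x.unsqueeze(-1) - x.unsqueeze(0)) ** 2
    return variance * torch.exp(-0.5 * sq_dists / lengthscale**2)

W = 52  # hypothetical number of weeks
weeks = torch.arange(W, dtype=torch.float64)

K = rbf_kernel(weeks, variance=1.0, lengthscale=4.0)
# jitter: a tiny diagonal term that keeps K numerically positive definite
K_jittered = K + 1e-6 * torch.eye(W)
L = torch.linalg.cholesky(K_jittered)
```

`K_jittered` (or its Cholesky factor `L`, via `scale_tril=`) can then be passed to `MultivariateNormal` as before.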

1 Like