Cholesky decomposition during GPRegression model optimization

During the optimization step for GPRegression models it doesn’t seem like the jitter value is used anywhere.

I’m trying to use the model for Bayesian optimization, and the Cholesky decomposition keeps failing during SVI due to small negative values in the kernel matrix. I kept increasing the jitter, but it did not help; when I eventually looked at the source, the jitter value is not used in the model() function.

I was wondering whether this is intended, and if so, maybe I’m setting some parameter wrong? For now I’ve worked around the issue by creating a new kernel that adds some jitter to the kernel matrix:

def forward(self, X, Z=None, diag=False):
    K = super(SafeMatern52, self).forward(X, Z, diag)
    if not diag and Z is None:
        # add jitter to the diagonal; K is square here, so eye(n) suffices
        K = K + self.eps * torch.eye(K.shape[0], dtype=K.dtype, device=K.device)
    return K

@Yasasa In GPR, the noise term plays the role of jitter; we set the constraint noise > jitter. If you get a Cholesky decomposition error, I suggest increasing the jitter or running your model in float64.
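To illustrate why a diagonal noise/jitter term helps: a kernel matrix over duplicated (or nearly duplicated) inputs is singular up to rounding, so its Cholesky factorization fails until a small positive value is added to the diagonal. A minimal sketch in plain PyTorch, using a generic squared-exponential kernel rather than Pyro's (the 1e-6 jitter value is just an example):

```python
import torch

# duplicated inputs give exactly repeated rows in the Gram matrix,
# so K has a zero eigenvalue and is not positive definite
X = torch.tensor([[0.0], [0.0], [1.0]], dtype=torch.float64)
K = torch.exp(-torch.cdist(X, X) ** 2)

try:
    torch.linalg.cholesky(K)
    ok_without_jitter = True
except RuntimeError:
    ok_without_jitter = False  # fails: zero pivot during factorization

# a small jitter on the diagonal restores positive definiteness
K_jittered = K + 1e-6 * torch.eye(K.shape[0], dtype=K.dtype)
L = torch.linalg.cholesky(K_jittered)  # succeeds
```

In a GPR model the learned noise variance is added to the diagonal in exactly this way, which is why it can stand in for jitter.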


@fehiepsi Maybe a stupid question, but how do I make sure the entire model runs in float64? Is it enough that all the tensors I pass to the model are dtype=torch.float64, or do I need to make additional changes?

Hi @milost, I think you can use torch.set_default_tensor_type(torch.DoubleTensor). Otherwise, the error message will tell you which variables you need to cast to float64.
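A minimal sketch of switching the default to double precision; on recent PyTorch versions, torch.set_default_dtype is the equivalent (non-deprecated) form of the call above:

```python
import torch

# make newly created floating-point tensors default to float64, so every
# tensor the GP model builds internally is double precision
torch.set_default_dtype(torch.float64)
# (older form: torch.set_default_tensor_type(torch.DoubleTensor))

x = torch.randn(3)
print(x.dtype)  # torch.float64
```

Note this only affects tensors created after the call; data loaded earlier (e.g. from NumPy float32 arrays) still needs an explicit .double() cast.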
