Using the Coregionalize kernel on the GPU

I’ve set up a GP covariance function using the Coregionalize kernel as follows:

        # RBF acts on column 1 of the input; Coregionalize acts on column 0
        self.age_kernel = gp.kernels.Sum(
            gp.kernels.RBF(input_dim=1, active_dims=[1]),
            gp.kernels.Coregionalize(input_dim=1, active_dims=[0])
        )
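
(For reference, this assumes the usual alias import pyro.contrib.gp as gp and a two-column input along these lines; the tensor below is just a made-up example.)

    import torch
    import pyro.contrib.gp as gp  # assumed import alias used above

    # Made-up example input: column 0 is sliced by Coregionalize (active_dims=[0]),
    # column 1 by the RBF (active_dims=[1]).
    X = torch.tensor([[0.0, 23.0],
                      [1.0, 31.0],
                      [0.0, 47.0]], device="cuda")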

I have all of my data on the GPU. However, when I run the model, I can see that the Coregionalize kernel creates its components parameter without regard to the device of the data, so I assume it defaults to the CPU. Hence I get the following error when fitting the model:

RuntimeError                              Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/pyro/contrib/gp/kernels/coregionalize.py in forward(self, X, Z, diag)
     80     def forward(self, X, Z=None, diag=False):
     81         X = self._slice_input(X)
---> 82         Xc = X.matmul(self.components)
     83 
     84         if diag:

RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat2 in method wrapper_mm)

Is it possible to specify cuda as the device somehow?

I’d recommend setting the global default tensor type, so that new tensors (including the kernels’ parameters) are created on the GPU:

torch.set_default_tensor_type("torch.cuda.FloatTensor")
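
As a minimal sketch (the rebuilt kernel below just mirrors your snippet, and the import alias is an assumption): the key point is to set the default before the kernels are constructed, since Coregionalize allocates its components parameter at construction time:

    import torch
    import pyro.contrib.gp as gp

    # Must run before any kernels or GP models are built, so that
    # parameters created inside them default to CUDA.
    torch.set_default_tensor_type("torch.cuda.FloatTensor")

    # Rebuilding the kernel now places its parameters, including
    # Coregionalize's `components`, on the GPU.
    age_kernel = gp.kernels.Sum(
        gp.kernels.RBF(input_dim=1, active_dims=[1]),
        gp.kernels.Coregionalize(input_dim=1, active_dims=[0]),
    )

Note that this also affects every tensor created afterwards, which is usually what you want when all of the data already lives on the GPU.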