Splitting a Gaussian process model over multiple GPUs

Hi. I am trying to fit a Gaussian process model to a rather high-dimensional, large dataset, and I keep getting a CUDA out-of-memory error when fitting the model.

Is it possible to split a big GP model across multiple GPUs?


Depending on what you're doing, you might try GPyTorch for this:

https://gpytorch.readthedocs.io/en/latest/examples/01_Simple_GP_Regression/Simple_MultiGPU_GP_Regression.html

We don't have much multi-GPU support built into Pyro (in general it would require a bit of hacking).
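
For reference, here is a minimal sketch along the lines of that GPyTorch tutorial: it wraps the base kernel in `gpytorch.kernels.MultiDeviceKernel`, which partitions the kernel matrix across all visible CUDA devices. The toy data, RBF kernel, and training-loop settings are placeholders, not your actual model.

```python
import torch
import gpytorch


class MultiGPUExactGP(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood, n_devices, output_device):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        base_covar = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())
        # MultiDeviceKernel splits kernel computation across the given devices
        # and gathers the result on output_device.
        self.covar_module = gpytorch.kernels.MultiDeviceKernel(
            base_covar, device_ids=range(n_devices), output_device=output_device
        )

    def forward(self, x):
        mean_x = self.mean_module(x)
        covar_x = self.covar_module(x)
        return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)


output_device = torch.device("cuda:0")
n_devices = torch.cuda.device_count()

# Toy data just to make the sketch self-contained.
train_x = torch.linspace(0, 1, 1000).unsqueeze(-1).to(output_device)
train_y = torch.sin(train_x * 6).squeeze(-1).to(output_device)

likelihood = gpytorch.likelihoods.GaussianLikelihood().to(output_device)
model = MultiGPUExactGP(train_x, train_y, likelihood, n_devices, output_device).to(output_device)

# Standard exact-GP training loop.
model.train()
likelihood.train()
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)

for _ in range(50):
    optimizer.zero_grad()
    output = model(train_x)
    loss = -mll(output, train_y)
    loss.backward()
    optimizer.step()
```

The linked tutorial also combines this with kernel checkpointing to trade compute for memory on very large datasets; the sketch above only shows the multi-device split itself.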