Hi. I have a rather high-dimensional, big-data Gaussian process model that I am trying to fit, and I keep getting a CUDA out-of-memory error when fitting the model to the data.
Is it possible to split a big GP model across multiple GPUs?
Depending on what you're doing, you might try GPyTorch for this:
We don't have much multi-GPU support built into Pyro (in general it would require a bit of hacking).