I want to run Pyro on two GPUs. The context is the following:
- I have a deterministic torch model
- Since I have two GPUs, I wrap it with `model = torch.nn.DataParallel(model)`
- I am putting priors on the model's weights using the standard lifting approach (`pyro.random_module`), applied to the `DataParallel` object
When I execute it, it gives me the error:

`Broadcast function not implemented for CPU tensors`
I would appreciate any tips or help!