How to find the devices of all the tensors used by Pyro

While running SVI for a model, I encountered the following error:

```
Traceback (most recent call last):
  File "gdrive/My Drive/causal_scene_generation/vae_svi/vae.py", line 419, in <module>
    model, optimizer = main(args)
  File "gdrive/My Drive/causal_scene_generation/vae_svi/vae.py", line 362, in main
    epoch_loss += svi.step(x,y, actor, reactor, actor_type, reactor_type)
  File "/usr/local/lib/python3.6/dist-packages/pyro/infer/svi.py", line 128, in step
    loss = self.loss_and_grads(self.model, self.guide, *args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/pyro/infer/trace_elbo.py", line 126, in loss_and_grads
    for model_trace, guide_trace in self._get_traces(model, guide, args, kwargs):
  File "/usr/local/lib/python3.6/dist-packages/pyro/infer/elbo.py", line 170, in _get_traces
    yield self._get_trace(model, guide, args, kwargs)
  File "/usr/local/lib/python3.6/dist-packages/pyro/infer/trace_elbo.py", line 53, in _get_trace
    "flat", self.max_plate_nesting, model, guide, args, kwargs)
  File "/usr/local/lib/python3.6/dist-packages/pyro/infer/enum.py", line 55, in get_importance_trace
    model_trace.compute_log_prob()
  File "/usr/local/lib/python3.6/dist-packages/pyro/poutine/trace_struct.py", line 216, in compute_log_prob
    log_p = site["fn"].log_prob(site["value"], *site["args"], **site["kwargs"])
  File "/usr/local/lib/python3.6/dist-packages/pyro/distributions/torch.py", line 52, in log_prob
    return super().log_prob(value)
  File "/usr/local/lib/python3.6/dist-packages/torch/distributions/categorical.py", line 115, in log_prob
    return log_pmf.gather(-1, value).squeeze(-1)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
```

However, I verified that all the variables had been moved to CUDA with <var_name>.cuda(). Do I need to do anything else to set a variable's device? How do I check which variables are on the CPU and which are on CUDA?

@eb8680_2 @jpchen @martinjankowiak

One of log_pmf and value is still on the CPU. You can use a debugger to check the device of each tensor (visible in their .device attributes), track down the offending tensor to its source, and add the appropriate .to(device=...)/.cuda() call there.
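To make that debugging step concrete, here is a minimal sketch of such a device check. The helper `report_devices` is hypothetical (not part of PyTorch or Pyro); it just walks a module's parameters and buffers and records each tensor's `.device`. The same `.device` check works on any individual tensor, including the values stored at trace sites.

```python
import torch
import torch.nn as nn

def report_devices(module: nn.Module) -> dict:
    """Return {name: device string} for every parameter and buffer in a module."""
    devices = {}
    for name, param in module.named_parameters():
        devices[name] = str(param.device)
    for name, buf in module.named_buffers():
        devices[name] = str(buf.device)
    return devices

# A freshly constructed module lives on the CPU until it is moved.
net = nn.Linear(3, 2)
print(report_devices(net))  # both 'weight' and 'bias' report 'cpu'

# After net.cuda() (on a machine with a GPU), the same call would
# report 'cuda:0' for every entry; any tensor still showing 'cpu'
# is a candidate source of the mixed-device RuntimeError.
```

Running this on the model and guide (or printing `site["value"].device` for each site in a Pyro trace) pinpoints which tensor never made it to the GPU.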

I looked at all the variables that were created in the model and guide. All of them were on CUDA. I finally solved it with the following code:

```
torch.set_default_tensor_type('torch.cuda.FloatTensor')
```

Is there any disadvantage to doing this? (Apart from losing control over the choice of device.)
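One thing to be aware of: `torch.set_default_tensor_type` changes what *every* tensor factory call produces process-wide, so it can silently affect unrelated code. A small CPU-only sketch of this global effect, using the double-precision default instead of the CUDA one so it runs anywhere:

```python
import torch

# Out of the box, factory functions produce float32 CPU tensors.
x = torch.zeros(2)
print(x.dtype)  # torch.float32

# Changing the default alters every subsequent factory call in the
# process, analogous to switching to torch.cuda.FloatTensor.
torch.set_default_tensor_type(torch.DoubleTensor)
y = torch.zeros(2)
print(y.dtype)  # torch.float64

# Restore the original default so later code is not surprised.
torch.set_default_tensor_type(torch.FloatTensor)
```

Because the setting is global and mutable, a library you import could observe (or even change) it, which is why the per-tensor `.to(device=...)` approach gives finer control.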