Gradient Can't Be Computed with Variable in For Loop

Hi, I’m trying to implement an autoregressive time series model by creating an initial tensor of zeros called motion and then filling in each element of the tensor in a for loop. But I’m getting an error during training that says: ‘one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor []], which is output 0 of AsStridedBackward0…’

I think this error has something to do with filling in the values of the motion tensor inside a for loop, but I’m not sure whether there’s a better way to do the computation that avoids it. I know that for loops and other arbitrary Python control flow are possible in probabilistic models, so is there a good way to implement this kind of computation without the gradient error? The relevant code from the model is below.

  # - Local trend component.
  # Prior for time-global scale.
  drift_scale = pyro.sample('drift_scale', dist.HalfNormal(25).expand([data_dim]).to_event(1))
  with pyro.plate('time', len(data), dim=-1):
    drift = pyro.sample('drift', dist.Normal(0, drift_scale).to_event(1))

  # Initialize motion tensor.
  motion = torch.zeros(len(drift) + 1)
  # Autoregressive coefficient.
  rho = pyro.sample('rho', dist.Uniform(-1, 1))
  for i in range(len(drift)):
    motion[i+1] = rho * motion[i] + drift[i]  # AR(1) step, written in place into motion
  motion = motion[1:].unsqueeze(-1)  # drop initial state and add data dimension of size 1

There’s usually no particularly compelling reason to use in-place ops. Instead it’s better to append to a list and combine with torch.stack or torch.cat, or to define a prev_motion variable and keep updating it (see e.g. here). A sketch of the list-based version is below.
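For reference, a minimal sketch of the list-and-stack version of the loop above, reusing the drift and rho sampled in the original snippet (the additive AR(1) update is assumed; adjust to whatever recursion you actually intend):

  # Accumulate each step in a Python list instead of writing into a
  # preallocated tensor, then stack once at the end.
  prev = drift.new_zeros(())      # initial state, plays the role of motion[0]
  steps = []
  for i in range(len(drift)):
    prev = rho * prev + drift[i]  # each update creates a new tensor; nothing is modified in place
    steps.append(prev)
  motion = torch.stack(steps).unsqueeze(-1)  # shape (len(drift), 1), same as motion[1:]

Because every step produces a fresh tensor and torch.stack assembles the sequence in one go, autograd only sees ordinary out-of-place ops and the backward pass goes through.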

Ah ok, so don’t modify tensors in place. I went ahead and just appended the values to a list and converted the list to a tensor, and that worked without any issues. Thanks!