Hi, I’m trying to implement an autoregressive time series model by creating an initial tensor of zeros called `motion` and then filling in each element of the tensor in a `for` loop. But I’m getting an error during training that says: ‘one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor []], which is output 0 of AsStridedBackward0…’

I think this error has something to do with filling in the values of the `motion` tensor inside the for loop, but I’m not sure if there’s a better way to do the computation that avoids the error. I know that for loops and other arbitrary Python control flow are possible in probabilistic models. Is there a good way to implement these computations without the gradient error? The relevant code from the model is below.

```
# - Local trend component.
# Prior for time-global scale.
drift_scale = pyro.sample('drift_scale', dist.HalfNormal(25).expand([data_dim]).to_event(1))
with pyro.plate('time', len(data), dim=-1):
    drift = pyro.sample('drift', dist.Normal(0, drift_scale).to_event(1))
# Initialize motion tensor.
motion = torch.zeros(len(drift) + 1)
# Autoregressive coefficient.
rho = pyro.sample('rho', dist.Uniform(-1, 1))
for i in range(len(drift)):
    motion[i+1] = motion[i] * rho * drift[i]
motion = motion[1:].unsqueeze(-1)  # drop initial state and add data dimension of size 1
```
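In case it helps, here is a minimal plain-PyTorch snippet (no Pyro, with made-up placeholder values for `rho` and `drift`) that I believe reproduces the same error: each loop iteration writes in place into `motion`, whose earlier elements autograd has saved for the multiply’s backward pass.

```python
import torch

rho = torch.tensor(0.5, requires_grad=True)   # stands in for the sampled rho
drift = torch.tensor([0.1, 0.2, 0.3])         # stands in for the sampled drift
motion = torch.zeros(len(drift) + 1)
for i in range(len(drift)):
    # Each assignment is an in-place write into `motion`, whose earlier
    # elements were saved by autograd for the multiply's backward pass.
    motion[i + 1] = motion[i] * rho * drift[i]

err = None
try:
    motion[1:].sum().backward()
except RuntimeError as e:
    err = e
print('RuntimeError:', err)
```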
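One alternative I’m considering (again a plain-PyTorch sketch with placeholder values) is to accumulate each step in a Python list and call `torch.stack` once at the end, so nothing is written in place into a tensor that is part of the graph. Is this the recommended pattern for this kind of recurrence in a Pyro model?

```python
import torch

rho = torch.tensor(0.5, requires_grad=True)   # stands in for the sampled rho
drift = torch.tensor([0.1, 0.2, 0.3])         # stands in for the sampled drift

# Accumulate states in a list instead of assigning into a preallocated tensor.
states = [torch.zeros(())]                    # initial state
for d in drift:
    states.append(states[-1] * rho * d)
motion = torch.stack(states[1:]).unsqueeze(-1)  # drop initial state, add data dim

motion.sum().backward()                       # backward runs without the in-place error
print(rho.grad)
```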