Based on the Linear Regression Tutorial: How to Get Gradients When Passing the Optimal Parameters Back into the Model

I have a basic linear model, as follows:

import torch
import pyro
import pyro.distributions as dists

def model(is_cont_africa, ruggedness, log_gdp=None):
    # Learnable point estimates (MLE-style: no priors, just pyro.param sites)
    a = pyro.param("a", lambda: torch.randn(()))
    b_a = pyro.param("b_a", lambda: torch.randn(()))
    b_r = pyro.param("b_r", lambda: torch.randn(()))
    b_ar = pyro.param("b_ar", lambda: torch.randn(()))
    sigma = pyro.param("sigma", lambda: torch.ones(()))

    mean = a + b_a * is_cont_africa + b_r * ruggedness + b_ar * is_cont_africa * ruggedness

    with pyro.plate("data", len(ruggedness)):
        return pyro.sample("obs", dists.Normal(mean, sigma), obs=log_gdp)

pyro.render_model(model, model_args=(is_cont_africa, ruggedness, log_gdp), render_params=True)
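For context, is_cont_africa, ruggedness, and log_gdp are the tensors from the tutorial's preprocessing of the terrain-ruggedness dataset. I am not copying my exact notebook cell here, but it was essentially the tutorial's code:

import numpy as np
import pandas as pd

DATA_URL = "https://d2hg8soec8ck9v.cloudfront.net/datasets/rugged_data.csv"
df = pd.read_csv(DATA_URL, encoding="ISO-8859-1")[["cont_africa", "rugged", "rgdppc_2000"]]
df = df[np.isfinite(df.rgdppc_2000)]           # drop countries with missing GDP
df["rgdppc_2000"] = np.log(df["rgdppc_2000"])  # work with log GDP

train = torch.tensor(df.values, dtype=torch.float)
is_cont_africa, ruggedness, log_gdp = train[:, 0], train[:, 1], train[:, 2]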

I already used SVI to get the optimal parameters via MLE, roughly as in the sketch below.
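(A sketch of my training step. The guide is empty because all the learnable parameters are pyro.param sites in the model itself, so minimizing the Trace_ELBO loss amounts to maximizing the log-likelihood of obs.)

pyro.clear_param_store()  # start from fresh parameter values

def guide(is_cont_africa, ruggedness, log_gdp=None):
    pass  # no latent sample sites: the ELBO reduces to the log-likelihood

svi = pyro.infer.SVI(model,
                     guide,
                     pyro.optim.Adam({"lr": 0.02}),
                     loss=pyro.infer.Trace_ELBO())

for step in range(1000):
    svi.step(is_cont_africa, ruggedness, log_gdp)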

But when I then pass the fitted parameters back into the model and try to get their gradients directly, like this, it fails:

loss = model(is_cont_africa, ruggedness, log_gdp)
loss = loss.sum()

loss.backward()

RuntimeError                              Traceback (most recent call last)
Cell In[189], line 4
      1 loss = model(is_cont_africa, ruggedness, log_gdp)
      2 loss = loss.sum()
----> 4 loss.backward()

File /opt/conda/lib/python3.10/site-packages/torch/_tensor.py:492, in Tensor.backward(self, gradient, retain_graph, create_graph, inputs)
    482 if has_torch_function_unary(self):
    483     return handle_torch_function(
    484         Tensor.backward,
    485         (self,),
   (...)
    490         inputs=inputs,
    491     )
--> 492 torch.autograd.backward(
    493     self, gradient, retain_graph, create_graph, inputs=inputs
    494 )

File /opt/conda/lib/python3.10/site-packages/torch/autograd/__init__.py:251, in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs)
    246     retain_graph = create_graph
    248 # The reason we repeat the same comment below is that
    249 # some Python versions print out the first line of a multi-line function
    250 # calls in the traceback and some print out the last line
--> 251 Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
    252     tensors,
    253     grad_tensors_,
    254     retain_graph,
    255     create_graph,
    256     inputs,
    257     allow_unreachable=True,
    258     accumulate_grad=True,
    259 )

RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
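For reference, this is how I read the fitted values back out of the param store (in case the problem is in this step):

for name, value in pyro.get_param_store().items():
    print(name, value, value.requires_grad)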

Thanks!