I looked at another post on the forum: Getting the meaning behind pyro.sample with obs - #2 by fehiepsi

Now I can obtain the log-likelihood of the model:

```
import torch
import pyro

def model2(x, y=None, theta_0_val=None, theta_1_val=None):
    pyro.clear_param_store()
    if theta_0_val is not None:
        theta_0 = pyro.param("theta_0", torch.tensor(theta_0_val))
    else:
        theta_0 = pyro.param("theta_0", torch.randn(1))
    if theta_1_val is not None:
        theta_1 = pyro.param("theta_1", torch.tensor(theta_1_val))
    else:
        theta_1 = pyro.param("theta_1", torch.randn(1))
    with pyro.plate("data", len(x)):
        return pyro.sample(
            "obs", pyro.distributions.Normal(x * theta_1 + theta_0, 1.0), obs=y
        )

def ll(x, y, theta_0, theta_1):
    # Run the model under a trace and sum the log-probs of all sample sites
    trace = pyro.poutine.trace(model2).get_trace(x, y, theta_0, theta_1)
    return trace.log_prob_sum()
```

I implemented the same function using PyTorch distributions.

```
def ll_torch(x, y, theta_0, theta_1):
    d = torch.distributions.Normal(loc=theta_0 + theta_1 * x, scale=1.0)
    return d.log_prob(y).sum()
```
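For reference, here is one way the comparison below could be reproduced end to end. The original post does not show how `x` and `y` were generated, so the data-generation step here is an assumption (true parameters chosen as 2 and 2 to match the calls below):

```
import torch

def ll_torch(x, y, theta_0, theta_1):
    d = torch.distributions.Normal(loc=theta_0 + theta_1 * x, scale=1.0)
    return d.log_prob(y).sum()

# Hypothetical data; the original x and y are not shown in the post.
torch.manual_seed(0)
x = torch.linspace(0, 10, 1000)
y = 2.0 + 2.0 * x + torch.randn(1000)  # true theta_0 = theta_1 = 2

print(ll_torch(x, y, 2.0, 2.0))  # a scalar tensor (value depends on the data)
```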

And confirmed that I got the same answer from both implementations.

```
ll_torch(x, y, 2., 2.)
```

gives output

```
tensor(-2161.8367)
```

```
ll(x, y, 2., 2.)
```

gives output:

```
tensor(-2161.8367, grad_fn=<AddBackward0>)
```
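The only difference between the two results is the `grad_fn` on the Pyro one: `pyro.param` registers tensors with `requires_grad=True`, so any computation on them carries a gradient function, exactly as in plain PyTorch:

```
import torch

# A leaf tensor with requires_grad=True, like the ones pyro.param stores
theta = torch.tensor(2.0, requires_grad=True)
out = (theta * 3.0).sum()

print(out.grad_fn is not None)  # True: the result tracks gradients
print(out.detach())             # detach() drops the graph, giving a plain tensor
```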

### Questions

I have the following questions:

- Is there a better way to compute the log-likelihood than what I have done?
- Suppose I want to obtain the MLE from scratch in Pyro (without setting a guide and optimising the ELBO). Conceptually this is trivial: optimise the log-likelihood directly. In regular PyTorch, this would be as follows:

```
t0 = torch.tensor(0., requires_grad=True)
t1 = torch.tensor(0., requires_grad=True)
optim = torch.optim.Adam([t0, t1], lr=0.1)
for i in range(100):
    loss = -ll_torch(x, y, t0, t1)
    loss.backward()
    optim.step()
    optim.zero_grad()
```
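As a sanity check on any iterative optimiser: because this model is linear with Gaussian noise of fixed scale, the MLE coincides with ordinary least squares, which has a closed form. A sketch (the data-generation step is a stand-in, since the original `x` and `y` are not shown):

```
import torch

# Hypothetical data with true theta_0 = theta_1 = 2
torch.manual_seed(0)
x = torch.linspace(0, 10, 1000)
y = 2.0 + 2.0 * x + torch.randn(1000)

# Design matrix [1, x]; lstsq minimises ||A @ theta - y||^2
A = torch.stack([torch.ones_like(x), x], dim=1)
theta_hat = torch.linalg.lstsq(A, y.unsqueeze(1)).solution.squeeze()
print(theta_hat)  # should be close to (2.0, 2.0)
```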

For Pyro, do I need to clear the param store each time I update the parameters? Is there a way to `update` the values in the param store?