The first value returned by pyro.get_param_store()

I am new to Pyro. In Introduction (Part 1), code block No. 12:

```python
guide.requires_grad_(False)

for name, value in pyro.get_param_store().items():
    print(name, pyro.param(name))
```

it returns:

```
AutoDiagonalNormal.loc Parameter containing:
tensor([-2.2371, -1.8097, -0.1691,  0.3791,  9.1823])
AutoDiagonalNormal.scale tensor([0.0551, 0.1142, 0.0387, 0.0769, 0.0702])
```

Code block No. 13:

```python
guide.quantiles([0.25, 0.5, 0.75])
```

it gets:

```
{'sigma': [tensor(0.9328), tensor(0.9647), tensor(0.9976)],
 'linear.weight': [tensor([[-1.8868, -0.1952, 0.3272]]),
  tensor([[-1.8097, -0.1691, 0.3791]]),
  tensor([[-1.7327, -0.1429, 0.4309]])],
 'linear.bias': [tensor([9.1350]), tensor([9.1823]), tensor([9.2297])]}
```

The problem is:
`AutoDiagonalNormal.loc = [-2.2371, -1.8097, -0.1691, 0.3791, 9.1823]`
From the quantiles I can tell that -1.8097, -0.1691, 0.3791 are the weights and 9.1823 is the bias, but what about -2.2371? It doesn't look like sigma.

It's not immediately obvious, but the docstring of AutoContinuous says:

> This uses `torch.distributions.transforms` to transform each constrained latent variable to an unconstrained space, then concatenate all variables into a single unconstrained latent variable.

So `AutoDiagonalNormal.loc` holds *unconstrained* values, while `guide.quantiles` returns *constrained* values. Since `sigma` has a `Uniform(0., 10.)` prior in the model, its unconstrained location -2.2371 looks nothing like its constrained median 0.9647. To convince yourself, you can recover the constrained value:

```python
import torch
import pyro.distributions as dist
from torch.distributions import biject_to

# sigma's prior in the model is Uniform(0., 10.)
fn = dist.Uniform(0., 10.)
biject_to(fn.support)(torch.tensor(-2.2371))
# returns tensor(0.9647), matching the median of 'sigma' above
```
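As a cross-check that needs neither Pyro nor PyTorch: for an interval support like (0, 10), `biject_to` composes a sigmoid with an affine rescaling, so the round trip between unconstrained and constrained space can be reproduced with plain math. This is a sketch; the helper names `to_constrained`/`to_unconstrained` are mine, not part of any library.

```python
import math

# Unconstrained -> constrained: sigmoid squashes to (0, 1),
# then an affine map rescales to (low, high).
def to_constrained(u, low=0.0, high=10.0):
    return low + (high - low) / (1.0 + math.exp(-u))

# Constrained -> unconstrained: the inverse (logit after rescaling).
def to_unconstrained(x, low=0.0, high=10.0):
    p = (x - low) / (high - low)
    return math.log(p / (1.0 - p))

print(round(to_constrained(-2.2371), 4))   # ~0.9647, the median from guide.quantiles
print(round(to_unconstrained(0.9647), 4))  # ~-2.2371, the value in AutoDiagonalNormal.loc
```

Running the inverse direction shows exactly where -2.2371 comes from: it is the logit-transformed image of sigma's constrained median.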