What latent variables are packed in pyro.param() in the Bayesian Regression tutorial?

I am going through the Bayesian Regression tutorial.

In cell [10]:

for name, value in pyro.get_param_store().items():
    print(name, pyro.param(name))

Output:
auto_loc tensor([-2.2026, 0.2936, -1.8873, -0.1607, 9.1753], requires_grad=True)
auto_scale tensor([0.2285, 0.0954, 0.1376, 0.0600, 0.1042], grad_fn=)

The tutorial goes on to say, “Note that Autoguide packs the latent variables into a tensor, in this case, one entry per variable sampled in our model.”

Could someone explain what the 5 latent variables are? Thanks.

Everything in a pyro.sample() statement, so in order: linear.weight (2), linear.bias (1), factor (1), and sigma (1). The reason w_prior, b_prior, and f_prior don’t have pyro.sample() statements is that random_module does that under the hood.
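
If it helps, you can also check which sample sites the autoguide sees by tracing the model directly. A rough sketch, assuming the tutorial’s model, x_data, and y_data are in scope (adjust the names to your notebook):

import pyro.poutine as poutine

# Record one execution of the model and list its latent (non-observed) sample sites.
trace = poutine.trace(model).get_trace(x_data, y_data)
for name, node in trace.nodes.items():
    if node["type"] == "sample" and not node["is_observed"]:
        print(name, tuple(node["value"].shape))

The element counts of these sites should add up to the 5 entries packed into auto_loc and auto_scale.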

I don’t understand the order that @jpchen specified. The order looks consistent with the guide.quantiles output from the tutorial (below): for example, the bias parameter is around 9, and the last entry of pyro.param('auto_loc') is also around 9.

Given guide.quantiles([0.25, 0.5, 0.75])

# {'sigma': 
#   tensor([0.9137, 0.9476, 0.9827]),
#  'linear.weight': 
#   tensor([[[-1.9587, -0.1856,  0.2750]],
#          [[-1.8759, -0.1589,  0.3326]],
#          [[-1.7931, -0.1321,  0.3902]]]),
#  'linear.bias': 
#   tensor([[9.1461],
#          [9.1928], 
#          [9.2395]])}

However, that’s inconvenient. Is there any way to see each latent variable’s name for the point estimates in pyro.param()?
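
One option (just a sketch, assuming guide is the AutoDiagonalNormal from the tutorial) is to let the autoguide unpack its flat parameters back into a dict keyed by site name:

# Point estimates keyed by latent variable name,
# e.g. {'sigma': ..., 'linear.weight': ..., 'linear.bias': ...}
for name, value in guide.median().items():
    print(name, value)

guide.quantiles([0.5]) carries the same information. As far as I understand, auto_loc stores the latents in unconstrained space, so the entry for a constrained site like sigma won’t match its quantile value directly; median() and quantiles() transform back to the constrained space for you.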