I am trying out different methods to estimate my latent variables. The variables should be correlated, and hence I want to use an AutoNormal guide to get the means as well as the variances and covariances. Since the AutoNormal guide seems to hide which parameter is being estimated (because all parameters are contained in auto_loc, auto_scale), I do not know which parameter is represented by a certain index. As an example:
PS: I am using the diagonal guide for now, just to get started without too much computational effort.
In the above model, at which index of auto_loc and auto_scale do I find the estimated mean and scale of noise_std, for example?
A second question: how is the parameter for raw_data, which is sampled from a distribution depending on alpha and beta, represented in the guide for this model? I want to find the estimated locs as well as the scales of the entries of the raw_data tensor. Since raw_data is itself a sample, I don't know how its parameters are represented in the guide.
Hi @abelstam, if you want to inspect an AutoDiagonalNormal guide, you could try the .median() or .quantiles() methods, which return dicts keyed by sample site name. Alternatively, you can draw samples by calling guide(); the return value is a dict of samples, also keyed by site name.
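For example, with a toy model (a stand-in here, since I don't have your exact model; only the inspection calls at the end are the point):

```python
import torch
import pyro
import pyro.distributions as dist
from pyro.infer import SVI, Trace_ELBO
from pyro.infer.autoguide import AutoDiagonalNormal
from pyro.optim import Adam

def model(data):
    # stand-in model with a positive-constrained site, like your noise_std
    noise_std = pyro.sample("noise_std", dist.HalfNormal(1.0))
    mu = pyro.sample("mu", dist.Normal(0.0, 1.0))
    with pyro.plate("data", len(data)):
        pyro.sample("obs", dist.Normal(mu, noise_std), obs=data)

data = 2.0 + 0.5 * torch.randn(100)
pyro.clear_param_store()
guide = AutoDiagonalNormal(model)
svi = SVI(model, guide, Adam({"lr": 0.01}), Trace_ELBO())
for _ in range(1000):
    svi.step(data)

print(guide.median(data))                        # {'noise_std': ..., 'mu': ...}
print(guide.quantiles([0.05, 0.5, 0.95], data))  # dict keyed by site name
print(guide(data))                               # one posterior draw per site
```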
Note that AutoDiagonalNormal does not expose the underlying loc and scale parameters because it actually learns loc and scale in unconstrained space, and those unconstrained parameters don't correspond to the constrained values in a clean way (I think of the bijection as an implementation detail). However, @patrickeganfoley is working on a new class AutoNormal that exposes per-site variables, as AutoDelta does; if you have usage requests you might comment on that PR.
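To see why the raw loc and scale aren't directly meaningful, consider a positive-constrained site: the guide's Normal lives in unconstrained (log) space, and only quantiles survive the bijection cleanly. A small sketch (the loc/scale values here are made up for illustration):

```python
import torch
from torch.distributions import biject_to, constraints

# The guide learns Normal(loc, scale) over an unconstrained value z;
# a positive-constrained site then sees biject_to(positive)(z), i.e. exp(z).
loc, scale = torch.tensor(0.5), torch.tensor(0.3)
z = torch.distributions.Normal(loc, scale).sample((100000,))
x = biject_to(constraints.positive)(z)

print(x.median())  # ~ exp(loc): the bijection preserves quantiles like the median
print(x.mean())    # ~ exp(loc + scale**2 / 2): the constrained mean is NOT exp(loc)
```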
Thanks for the clarification! I did not find the details about the unconstrained learning of parameters anywhere in the documentation (please excuse me if I have not looked properly), but I think it is very interesting. Maybe this could be clarified more in the examples/documentation? Is this something Pyro-specific, or a general approach of most PPLs? Perhaps the use of pyro.get_param_store should be discouraged in some situations, to make clear that it is not a one-to-one representation of your constrained parameter space. I think it would be very clarifying to have one clear-cut way of reading and interacting with the 'physical' parameters in general, e.g. pyro.get_constrained_parameters()? It can be a little confusing to access your parameters in different ways depending on the model/guide you are using.
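For illustration, this is what I mean: all I can see in the store are the packed guide parameters, whose names and indexing don't map one-to-one onto my model's latents (exact names presumably depend on the Pyro version):

```python
import pyro

# The raw store only exposes the guide's packed parameters, not per-site values.
for name, value in pyro.get_param_store().items():
    print(name, tuple(value.shape))
# e.g. something like:
#   auto_loc   (D,)
#   auto_scale (D,)
# where D is the total latent dimension, with all sites packed together.
```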
As an addition to the .quantiles() and .median() methods, it would be nice to have .mean() and .scale()/.variance()/.std(), if the above is not possible… but these suggestions are probably better suited to the issue you posted. I will take a look at that one now.
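In the meantime, a rough workaround I plan to use to get means and stds, assuming a trained guide:

```python
import torch

# Estimate posterior moments from repeated guide draws.
draws = [guide() for _ in range(1000)]  # pass your model's args if it needs them
stacked = {name: torch.stack([d[name] for d in draws]) for name in draws[0]}
means = {name: s.mean(0) for name, s in stacked.items()}
stds = {name: s.std(0) for name, s in stacked.items()}
print(means["noise_std"], stds["noise_std"])
```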