, and others, but they all seem to have some additional complexity, such as focusing on the posterior predictive distribution of observations, not just the fitted guide distribution of parameters.
Also: once I’ve fitted an SVI object for n steps, does it contain only the latest guide parameters, or a record of all the guide parameter values over the course of optimization? I’d prefer the latter, but the docs don’t make this clear to me.
Sorry for the noob-like question, but I figure if this is confusing to me it’s probably confusing to others too. It would be good to have a simple cookbook answer for this, since I’m pretty sure it’s a common enough use case.
If you write your guide function so that it returns the latent variables after sampling them, then all you have to do after SVI terminates is call guide(...). That will generate latent variables according to the latest value of the variational parameters phi.
I’m not sure what the SVI object itself contains, but the parameters (for both model and guide) are stored in a global object called the parameter store (http://docs.pyro.ai/en/stable/parameters.html). Their values are overwritten at each iteration, so only the latest values are kept, not the full optimization history. This also explains why calling guide(...) after the end of SVI works as I suggested: through the pyro.param(...) statement, your guide looks up the latest value of the parameter in the store (or creates it the first time around) and uses it for simulation.
Does that answer your questions?