Passing samples from the guide into the model

I want to be able to parameterise an RNN in my model using samples from my guide. Reading through the docs, it’s not immediately obvious how I would go about doing this.

I want to do something along the lines of:

  1. Parameterise posterior distribution using amortised inference
  2. Sample from posterior distribution
  3. Feed those samples into an RNN in the model to parameterise a predictive distribution
  4. Factorise the Free Energy over time such that I can calculate an analytic KL for each time step.

Have a look at the Deep Markov Model tutorial, which seems very similar to what you want. If your guide is mean-field, you can use pyro.infer.TraceMeanField_ELBO to compute an ELBO with analytic KL terms.
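To make the mechanics concrete, here's a rough sketch of how guide samples end up driving an RNN in the model. All sizes and network names (`rnn`, `z_loc`, `emit`, `encoder`) are hypothetical stand-ins, and `xs` is assumed to be a list of `(1, x_dim)` tensors; the key point is that the model and guide share site names, so SVI replays the guide's `z_t` values at the model's matching `pyro.sample` sites:

```python
import torch
import torch.nn as nn
import pyro
import pyro.distributions as dist

# All sizes and network names here are hypothetical, just to show the wiring.
z_dim, x_dim, hidden = 2, 3, 4
rnn = nn.GRUCell(z_dim, hidden)        # transition RNN, driven by z samples
z_loc = nn.Linear(hidden, z_dim)       # location of p(z_t | z_{t-1})
emit = nn.Linear(z_dim, x_dim)         # emission p(x_t | z_t)
encoder = nn.Linear(x_dim, 2 * z_dim)  # amortised inference network

def model(xs):
    pyro.module("rnn", rnn)
    pyro.module("z_loc", z_loc)
    pyro.module("emit", emit)
    h = torch.zeros(1, hidden)
    z = torch.zeros(1, z_dim)
    for t, x in enumerate(xs):
        # During SVI this z is whatever the guide sampled at step t-1,
        # because sites with matching names are replayed into the model.
        h = rnn(z, h)
        z = pyro.sample(f"z_{t}", dist.Normal(z_loc(h), 1.0).to_event(1))
        pyro.sample(f"x_{t}", dist.Normal(emit(z), 0.1).to_event(1), obs=x)

def guide(xs):
    pyro.module("encoder", encoder)
    for t, x in enumerate(xs):
        loc, log_scale = encoder(x).chunk(2, dim=-1)
        pyro.sample(f"z_{t}", dist.Normal(loc, log_scale.exp()).to_event(1))
```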

In the tutorial, I noticed the following passage saying that analytic KL divergences were not available at that point:

Finally, we should mention that the main difference between the DMM implementation described here and the one used in reference [1] is that they take advantage of the analytic formula for the KL divergence between two gaussian distributions (whereas we rely on Monte Carlo estimates). This leads to lower variance gradient estimates of the ELBO, which makes training a bit easier. We can still train the model without making this analytic substitution, but training probably takes somewhat longer because of the higher variance. Support for analytic KL divergences in Pyro is something we plan to add in the future.

So

  • does pyro.infer.TraceMeanField_ELBO use analytic KL divergences now?
  • and is TraceMeanField_ELBO suitable for training the deep Markov model? (I think maybe not, because the posterior in this case is p(z_t | z_{t-1}, x): it is conditional not only on the observation x but also on the previous state.) What do you think?

Thanks! I had seen that and initially did think it was what I was looking for. I’m still figuring out how Pyro works, so I wasn’t sure that the z_t samples would be passed from the guide to the model – in hindsight this is obvious!

  • does pyro.infer.TraceMeanField_ELBO use analytic KL divergences now?

Looking through the source, it will use the analytic KL wherever one is available (i.e. wherever it is implemented in torch.distributions.kl_divergence), and otherwise falls back to a Monte Carlo estimate.
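For example, you can check that a pair of distributions has a registered closed-form KL, and then just swap the loss in; this assumes the hypothetical `model`/`guide` sketched earlier in the thread:

```python
import torch.distributions as td
from pyro.infer import SVI, TraceMeanField_ELBO
from pyro.optim import Adam

# Normal/Normal has a registered closed-form KL, so no sampling is involved:
q = td.Normal(1.0, 2.0)
p = td.Normal(0.0, 1.0)
print(td.kl_divergence(q, p))  # tensor(1.3069)

# Plugging it into SVI (model/guide as sketched earlier):
svi = SVI(model, guide, Adam({"lr": 1e-3}), loss=TraceMeanField_ELBO())
```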

Hi, thank you very much for your reply! What do you think about the second question?

I think it’s fine. For the model that I’m currently trying to port to Pyro, I’ve been using something along these lines (albeit explicitly written out in PyTorch) and it’s worked alright for me.

For the model, you can factor over time to get:
p(z_{1:T}) = p(z_1) prod_{t=2}^T p(z_t | z_{t-1})

For the variational distribution you have:
q(z_{1:T} | x_{1:T}) = q(z_1 | x_{1:T}) prod_{t=2}^T q(z_t | z_{t-1}, x_{t:T}).

I think what you get from TraceMeanField_ELBO at time t is then E_{z*_{t-1} ~ q(z_{t-1})}[KL(q(z_t | z*_{t-1}, x_{t:T}) || p(z_t | z*_{t-1}))].
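For concreteness, here is a rough sketch of a guide with that factorisation, in the spirit of the DMM tutorial. The backward RNN summarising x_{t:T} and the `combiner` network are hypothetical stand-ins, and the site names must match the model's; the point is that z_{t-1} enters the guide only through its sampled value, which is why the step-t KL is conditioned on that sample:

```python
import torch
import torch.nn as nn
import pyro
import pyro.distributions as dist

z_dim, x_dim, hidden = 2, 3, 4
rnn_bwd = nn.GRU(x_dim, hidden)                  # run over x in reverse
combiner = nn.Linear(z_dim + hidden, 2 * z_dim)  # q(z_t | z_{t-1}, x_{t:T})

def guide(xs):
    pyro.module("rnn_bwd", rnn_bwd)
    pyro.module("combiner", combiner)
    seq = torch.stack(list(reversed(xs)))  # (T, 1, x_dim), time reversed
    h_rev, _ = rnn_bwd(seq)
    h = torch.flip(h_rev, [0])             # h[t] now summarises x_{t:T}
    z = torch.zeros(1, z_dim)
    for t in range(len(xs)):
        loc, log_scale = combiner(torch.cat([z, h[t]], dim=-1)).chunk(2, dim=-1)
        # z_{t-1} enters only through its sampled value z*, so the step-t KL
        # is conditioned on that sample, exactly as in the expression above.
        z = pyro.sample(f"z_{t}", dist.Normal(loc, log_scale.exp()).to_event(1))
```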

I see – the last equation makes it clear now. I will try both Trace_ELBO and TraceMeanField_ELBO in my model and compare their performance.

Thank you very much!

Let me know how it goes!