ELBO rewrite to analytically compute KL(q(z|x)||p(z)) in VAE


Hi, thanks for making this very interesting library.

I’ve got one question regarding its use: in many VAE-like models the ELBO is rewritten as

ELBO = E_{z \sim q(z|x)}[log p(x|z)] - KL(q(z|x) || p(z)),

and the KL divergence is computed analytically, which should reduce the variance of the gradient estimator. See the paper (eq. 7) or the implementation.
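For reference, in the common case of a diagonal-Gaussian posterior q(z|x) = N(mu, diag(sigma^2)) and a standard normal prior p(z) = N(0, I) (an assumption on my part; the paper may use a different parameterization), the KL term has the well-known closed form

```latex
\mathrm{KL}\big(q(z|x)\,\|\,p(z)\big) = \frac{1}{2} \sum_{d} \left( \mu_d^2 + \sigma_d^2 - \log \sigma_d^2 - 1 \right)
```

so no sampling is needed for that term of the ELBO.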

Is there any way in which one can implement this in Pyro?
That is, can the KL term be evaluated in closed form while only the reconstruction term E_{z \sim q(z|x)}[log p(x|z)] is estimated by sampling?



No, not currently. We’ve intended to include such a capability, but it’s a bit tricky to make it work in sufficient generality and in happy conjunction with other estimator tricks. In many cases sampled KLs work fine, though; for example, sampled KLs seem to be no impediment to learning the deep Markov model: http://pyro.ai/examples/dmm.html
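In the meantime, if you really want the analytic KL, one workaround is to assemble the ELBO by hand with `torch.distributions` (outside of Pyro's SVI machinery), using `kl_divergence` for the KL term and a reparameterized sample for the reconstruction term. A minimal sketch, assuming a diagonal-Gaussian q(z|x) and a standard normal p(z); the shapes and encoder outputs here are made up for illustration:

```python
import torch
from torch.distributions import Normal, kl_divergence

# Hypothetical encoder outputs for a batch of 4 points with 2-dim latents.
mu = torch.zeros(4, 2)
log_sigma = torch.zeros(4, 2)

q = Normal(mu, log_sigma.exp())             # q(z|x), diagonal Gaussian
p = Normal(torch.zeros(4, 2), torch.ones(4, 2))  # p(z), standard normal prior

# Analytic KL(q || p), summed over latent dims, averaged over the batch.
kl = kl_divergence(q, p).sum(-1).mean()

# Monte Carlo part of the ELBO: a reparameterized sample z ~ q(z|x),
# to be fed through the decoder to get log p(x|z).
z = q.rsample()
# elbo = log_px_given_z - kl   (decoder not shown)
```

The loss to minimize would then be `-elbo`, with gradients flowing through `mu` and `log_sigma` via both the analytic KL and the reparameterized sample.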