"Supporting Ammortized Inference for Normalizing Flows" (Github issue #1911)

I am currently using Pyro to implement normalizing flows as a way to enrich the proposed posterior of a variational autoencoder (for example, Rezende & Mohamed 2015 or Kingma et al. 2016). In the literature of flow-based VAE posteriors and/or priors, the parameters of flow transformations are amortized from the outputs of the encoder network.

As such, what are the best approaches for amortizing flow parameters in Pyro, in the context of VAEs? GitHub issue #1911 on pyro-ppl/pyro ("Supporting Amortized Inference for Normalizing Flows") is quite relevant here.

For IAF-style amortization via a context variable, conditional flows could perhaps be a solution: one could add a linear layer that maps the encoder outputs to a context variable, and use that variable to condition the flow transformations. However, in other amortization strategies, such as the one used in the original Sylvester flows paper (van den Berg et al., 2018), the flow parameters are amortized directly by the encoder network, bypassing the need for a context variable. One could implement this behaviour manually, but before doing so, I was wondering whether a similar mechanism is already implemented in Pyro?

Thank you for your help; I very much appreciate Pyro as a tool for incorporating normalizing flows into my research.

Hi @dmannk we now recommend @stefanwebb’s FlowTorch library for normalizing flows. FlowTorch can be used within Pyro or at a lower level using PyTorch optimizers. I am unfamiliar with normalizing flow methods, but you might ask the FlowTorch community.

Thank you @fritzo for your reply, I will ask the FlowTorch community.