Inverse AutoRegressive Flow in model for parallel sampling?


InverseAutoregressiveFlow class in pyro says:

Note that this implementation is only meant to be used in settings where the inverse of the Bijector is never explicitly computed (rather the result is cached from the forward call). In the context of variational inference, this means that the InverseAutoregressiveFlow should only be used in the guide, i.e. in the variational distribution. In other contexts the inverse could in principle be computed but this would be a (potentially) costly computation that scales with the dimension of the input (and in any case support for this is not included in this implementation)

According to my understanding of the Parallel WaveNet paper, IAF is used to transform a simple logistic distribution into a multivariate distribution so that all samples of the observed sequence can be generated in parallel, instead of one sample at a time. Wouldn't we need IAF in the model if we want a fast parallel sampling mechanism for a time series signal? Maybe my understanding is wrong. If not, how can I go about implementing an IAF-like transformation in my model?
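To make the asymmetry concrete: in the sampling direction an IAF is a single vectorized pass (each x_t depends only on the noise z_{<t}, which is all available at once), while inverting it — which is what you need to score an *observed* x under the model — is sequential in t. That is exactly why the Pyro docs restrict it to the guide. Here is a minimal toy sketch, not Pyro code, using a strictly lower-triangular linear map as a stand-in for the masked autoregressive network (all names here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # dimension / sequence length

# Toy "autoregressive network": strictly lower-triangular weights,
# so mu_t and log_sigma_t depend only on z_{<t}.
W_mu = np.tril(rng.normal(size=(D, D)), k=-1)
W_ls = np.tril(0.1 * rng.normal(size=(D, D)), k=-1)

def ar_net(z):
    """Return (mu, sigma); entry t depends only on z[:t]."""
    return W_mu @ z, np.exp(W_ls @ z)

def iaf_forward(z):
    """Sampling direction: one vectorized pass, all x_t in parallel."""
    mu, sigma = ar_net(z)
    return z * sigma + mu

def iaf_inverse(x):
    """Inversion (needed to score observed x): sequential in t."""
    z = np.zeros(D)
    for t in range(D):
        # Row t of the net only reads z[:t], which is already recovered.
        mu, sigma = ar_net(z)
        z[t] = (x[t] - mu[t]) / sigma[t]
    return z

z = rng.normal(size=D)
x = iaf_forward(z)        # fast: one pass
z_rec = iaf_inverse(x)    # slow: D sequential steps
assert np.allclose(z, z_rec)
```

So using an IAF in the model is fine if you only ever *sample* from it (as Parallel WaveNet does at inference time); the caching caveat in the Pyro docstring is about avoiding the sequential inverse during training on observed data.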

Any suggestions would be helpful.


It might be helpful to take a look here,

e.g. Section 3.2.