Using a fixed transformation as a guide for NeuTra

Hi,

In my project I am implementing a new type of sampling that relies heavily on having a transformation from a base distribution (a standard normal) to the target distribution. Luckily, a good implementation of this exists in the form of the NeuTra reparametrisation, and for a good part of the project this has enabled me to do what I want.

Because this NeuTra reparametrisation uses block_autoregressive transformations that need to be learned through SVI, getting some of my results takes longer than necessary. In particular, in the first stage of my project I know the exact description of the target distribution (which is always a multivariate normal). Instead of learning the block_autoregressive transformations, which takes a long time, I can construct a LowerCholeskyAffine that maps the base distribution exactly onto the target distribution. This is the correct transformation to use, but my problem is as follows.
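
For concreteness: if z ~ N(0, I) and x = Lz + mu with L the lower Cholesky factor of Sigma, then x ~ N(mu, Sigma), so the exact transform is available in closed form. A minimal sketch with made-up numbers (the names mu, Sigma and L are just for illustration):

import torch

# Known target N(mu, Sigma): the exact map from the standard normal base
# distribution is x = L @ z + mu, with L the lower Cholesky factor of Sigma.
mu = torch.tensor([1.0, -2.0])
Sigma = torch.tensor([[2.0, 0.5], [0.5, 1.0]])
L = torch.linalg.cholesky(Sigma)  # lower-triangular, Sigma == L @ L.T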

Given my setup I would like to use the same type of reparametrisation, i.e. the NeuTra reparametrisation, because most of my pipeline works with those types of models. That, however, requires a guide of the AutoContinuous type, of which AutoNormalizingFlow is one. Such a guide normally needs to be trained through SVI.
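
For reference, the standard learned setup I have been using so far looks like this (along the lines of the AutoNormalizingFlow docstring example; the number of flows is arbitrary here):

from functools import partial

from pyro.distributions.transforms import block_autoregressive, iterated
from pyro.infer.autoguide import AutoNormalizingFlow

# A stack of two block-autoregressive flows whose parameters are
# fitted with SVI; this works, but the training is what I want to avoid.
guide = AutoNormalizingFlow(pre_neutra_model, partial(iterated, 2, block_autoregressive))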

My idea was to use my constructed transformation, which does not need to be trained, and provide it to AutoNormalizingFlow, like so:


import torch
from pyro.distributions.transforms import LowerCholeskyAffine

# Set up the fixed transformation; target_mean and transformation_matrix are
# numpy arrays holding the target mean and the Cholesky factor of its covariance
transform = LowerCholeskyAffine(
    loc=torch.from_numpy(target_mean),
    scale_tril=torch.from_numpy(transformation_matrix),
)

# Set up the guide, reusing the iterated machinery with the fixed transform
guide = AutoNormalizingFlow(pre_neutra_model, partial(iterated, 1, transform))

The issue is that if I do not train it, none of the required attributes are set on the guide (transform, latent_dim, prototype_trace, etc.), so NeuTra can't access them and the reparametrisation fails.


from pyro import poutine
from pyro.infer.reparam import NeuTraReparam

###########
# Use the guide to create a NeuTra-transformed model
neutra = NeuTraReparam(guide.requires_grad_(False))
neutra_model = poutine.reparam(pre_neutra_model, config=lambda _: neutra)

neutra_model_trace = poutine.trace(neutra_model).get_trace()  # <<< ERROR HAPPENS HERE
for name, node in neutra_model_trace.iter_stochastic_nodes():
    print(name, node)

...
File "/home/david/.pyenv/versions/hmc3.9.9/lib/python3.9/site-packages/pyro/infer/reparam/neutra.py", line 72, in apply
    if name not in self.guide.prototype_trace.nodes:
AttributeError: 'NoneType' object has no attribute 'nodes'
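
Since the error is just prototype_trace being None, and as far as I can tell from the AutoGuide source the prototype trace is built lazily on the first call of the guide, I wondered whether a single forward call (no optimisation at all) would be enough to populate the missing attributes. An untested sketch of what I mean:

# Untested idea: AutoGuide builds prototype_trace, latent_dim, etc. lazily
# on its first call, so one forward call might initialise the guide
# without running any SVI steps.
guide()  # same args/kwargs as pre_neutra_model; none in my case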

But when I do perform an SVI training step, I run into a different error:

from pyro import optim
from pyro.infer import SVI, Trace_ELBO

#############
# Configure the optimiser (config is my experiment configuration dict)
adam = optim.Adam({"lr": config["learning_rate_map"]})

#############
# Configure the optimisation method
svi = SVI(pre_neutra_model, guide, adam, Trace_ELBO())

#############
# Train for a couple of steps
for training_i in range(2):
    #############
    # Calculate the loss
    loss = svi.step()  # ERROR HAPPENS HERE

...
  File "/home/david/.pyenv/versions/hmc3.9.9/lib/python3.9/site-packages/pyro/distributions/transforms/lower_cholesky_affine.py", line 51, in _call
    return torch.matmul(self.scale_tril, x.unsqueeze(-1)).squeeze(-1) + self.loc
AttributeError: 'int' object has no attribute 'unsqueeze'
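
If I read this traceback correctly, iterated treats its second argument as a factory and calls it with the remaining arguments, so partial(iterated, 1, transform) ends up evaluating transform(latent_dim), i.e. applying the affine transform to the integer latent dimension. Since AutoNormalizingFlow's init_transform_fn is documented to receive the latent dimension and return a transform, I suspect I should instead wrap the fixed transform in a function that ignores its argument. Untested:

# Untested: wrap the fixed transform so that init_transform_fn(latent_dim)
# returns it instead of applying it to the integer latent_dim.
guide = AutoNormalizingFlow(pre_neutra_model, lambda latent_dim: transform)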

My questions are:

  • Is there a way to construct a guide, circumvent the SVI training, and set the necessary components on it some other way?
  • Is using a guide actually the way to go? If not, should I consider building a NeuTra-like reparametrisation that takes a transformation directly instead? A rough sketch of what I have in mind is below.
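
To make the second question concrete, here is an untested sketch of what I imagine such a reparametrisation could look like, for a single latent site with unconstrained support (the class name and structure are mine; the masked base sample plus a Delta carrying the log-density mirrors what I understand NeuTraReparam to do internally):

import pyro
import pyro.distributions as dist
from pyro.infer.reparam.reparam import Reparam


class FixedTransformNeutra(Reparam):
    """Untested sketch: NeuTra-style reparam for one unconstrained latent
    site, driven by a fixed bijection T from N(0, I) to the target."""

    def __init__(self, transform):
        self.transform = transform

    def apply(self, msg):
        name, fn = msg["name"], msg["fn"]
        # Sample white noise; mask(False) removes its density contribution.
        base = dist.Normal(0.0, 1.0).expand(fn.event_shape).to_event(len(fn.event_shape))
        z = pyro.sample("{}_base".format(name), base.mask(False))
        # Push the noise through the fixed bijection.
        x = self.transform(z)
        # The reparametrised density over z is p(T(z)) |det J_T(z)|,
        # carried by the Delta site since the masked base contributes nothing.
        log_density = fn.log_prob(x) + self.transform.log_abs_det_jacobian(z, x)
        new_fn = dist.Delta(x, log_density=log_density, event_dim=fn.event_dim)
        return {"fn": new_fn, "value": x, "is_observed": True}

I would then pass an instance of this through poutine.reparam's config, the same way I pass the NeuTra reparam above.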

I would appreciate a pointer or advice on how to properly reparametrise models with pre-configured transformations, the same way NeuTra does.

Earlier posts related to my project are: