Inference compilation - transferring information from the model to the guide during inference

I am running an exercise to get hands-on experience with inference compilation. The idea is to have a generative model that generates (and renders) 2D images, and to train a neural-network inference system that proposes the parameters of each posterior distribution of interest. Following the ideas of the original paper (Le et al., 2017, "Inference Compilation and Universal Probabilistic Programming"), I modified the pyro.infer.CSIS class so that each execution trace of the model is accessible inside the guide's forward functions. My csis.loss_and_grads() function now looks roughly like this:

def loss_and_grads(self, grads, batch, step, *args, **kwargs):
    """
    :returns: an estimate of the loss (expectation over p(x, y) of
        -log q(x, y)), where p is the model and q is the guide
    :rtype: float

    If a batch of model traces is provided, the loss is estimated from
    those traces; otherwise, a fresh batch is generated from the model.

    If grads is True, gradients of the loss are also computed and
    accumulated into the guide parameters.

    `args` and `kwargs` are passed to the model and guide.
    """
    if batch is None:
        batch = (
            self._sample_from_joint(*args, **kwargs)
            for _ in range(self.training_batch_size)
        )
        batch_size = self.training_batch_size
    else:
        batch_size = len(batch)

    loss = 0
    for model_trace in batch:
        # Reset the per-trace bookkeeping so the guide only sees the choices
        # made in the current trace, not leftovers from previous traces.
        self.guide.latent_variables = []
        for name, site in model_trace.nodes.items():
            if site["type"] != "sample":
                continue
            # (code omitted) collect the sample statements of this trace and
            # initialize new proposal networks as needed; the resulting list
            # is then accessible inside the guide's forward
            self.guide.latent_variables.append({"name": name, "vals": site["value"]})

        # Replay the guide against the model trace so its sample statements
        # take the values recorded in the trace, while capturing the guide
        # parameters that are used along the way.
        with poutine.trace(param_only=True) as particle_param_capture:
            guide_trace = self._get_matched_trace(model_trace, *args, **kwargs)
        particle_loss = self._differentiable_loss_particle(guide_trace)
        particle_loss /= batch_size

        if grads:
            guide_params = set(
                site["value"].unconstrained()
                for site in particle_param_capture.trace.nodes.values()
            )
            guide_grads = torch.autograd.grad(
                particle_loss, guide_params, allow_unused=True
            )
            # Accumulate the per-particle gradients into .grad by hand,
            # mimicking what loss.backward() would do.
            for guide_grad, guide_param in zip(guide_grads, guide_params):
                if guide_grad is None:
                    continue
                guide_param.grad = (
                    guide_grad
                    if guide_param.grad is None
                    else guide_param.grad + guide_grad
                )

        loss += particle_loss.item()
    return loss
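
To make the use case more concrete, here is a heavily stripped-down sketch of how the guide's forward consumes this list during training. This is not my actual code: obs_encoder, step_rnn and proposal_net are simplified placeholders for my real networks, and I assume the (omitted) collection code keeps only the latent sample statements and that each one gets a scalar Normal proposal:

import torch
import torch.nn as nn
import pyro
import pyro.distributions as dist

class SketchGuide(nn.Module):
    def __init__(self, obs_dim=64, embed_dim=16, hidden_dim=32):
        super().__init__()
        self.obs_encoder = nn.Linear(obs_dim, embed_dim)  # encodes the observed image
        self.step_rnn = nn.GRUCell(embed_dim + 1, hidden_dim)
        self.proposal_net = nn.Linear(hidden_dim, 2)  # mean and log-scale per proposal
        self.latent_variables = []  # filled by loss_and_grads before each forward

    def forward(self, observations={"image": torch.zeros(64)}):
        pyro.module("guide", self)
        obs_embedding = self.obs_encoder(observations["image"].reshape(1, -1))
        hidden = torch.zeros(1, self.step_rnn.hidden_size)
        prev_value = torch.zeros(1, 1)
        # latent_variables tells the guide which sample statements this model
        # trace visited and in which order; it is exactly this list that is
        # empty when the guide runs first at inference time
        for entry in self.latent_variables:
            # concatenate the previous sample's value with the encoded observation
            rnn_input = torch.cat([obs_embedding, prev_value], dim=-1)
            hidden = self.step_rnn(rnn_input, hidden)
            mean, log_scale = self.proposal_net(hidden).squeeze(0)
            value = pyro.sample(entry["name"], dist.Normal(mean, log_scale.exp()))
            prev_value = value.reshape(1, 1)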

As you can see, during training the list self.guide.latent_variables is always populated with the desired information, because CSIS trains exclusively on traces generated from the model.

I don't think the guide code that consumes this list is essential to this question, but in short it drives the iterative forward steps that the inference compilation algorithm prescribes: at each time step the guide needs the value of the previous sample statement to concatenate with the encoded observation, and since each execution trace of the model may use different variables from step to step, the guide has to know exactly which choices are being made. The sketch above illustrates the idea.

However, when running inference to compute the posterior given new data, the pyro.infer.importance._traces() function runs the guide first and only then the generative model. That makes sense, but it also means that when the guide observes new data at inference time, self.guide.latent_variables is empty (the snippet at the end of this post shows the call path I mean). Is there any easy workaround for this problem?

Let me know if you need more information about this specific use case. Thank you in advance for all possible help!
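
For reference, this is roughly how I train and then run inference, mirroring the standard CSIS usage from the Pyro tutorial and reusing the SketchGuide sketch from above. The toy model, the optimiser settings, and the site names below are placeholders for my actual 2D-image setup:

import torch
import pyro
import pyro.distributions as dist
import pyro.infer
import pyro.optim

def model(observations={"image": torch.zeros(64)}):
    # toy stand-in for my generative model: one latent, one rendered "image";
    # during training CSIS unconditions the model, so "image" is sampled too
    z = pyro.sample("z_0", dist.Normal(0.0, 1.0))
    pyro.sample("image", dist.Normal(z.expand(64), 0.1).to_event(1),
                obs=observations["image"])

guide = SketchGuide()
optimiser = pyro.optim.Adam({"lr": 1e-3})
csis = pyro.infer.CSIS(model, guide, optimiser,
                       training_batch_size=8, num_inference_samples=50)

for _ in range(100):
    csis.step()  # training: traces come from the model, so latent_variables is filled

# Inference: run() goes through Importance._traces(), which executes the
# guide before replaying the model, so guide.latent_variables is empty here.
posterior = csis.run(observations={"image": torch.randn(64)})
marginal = pyro.infer.EmpiricalMarginal(posterior, sites="z_0")
print(marginal.mean)  # posterior mean of z_0 under the importance weights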