Does using TraceEnum_ELBO with an empty guide function mean maximum-likelihood learning?

For example, in an HMM model, the discrete latent variables are annotated with infer={"enumerate": "parallel"}.

If I use TraceEnum_ELBO with an empty guide function, does that mean I am actually maximising the likelihood by optimising the trainable parameters in the model?

Correct: if all of the latent variables in your model are enumerated and your guide is empty, then TraceEnum_ELBO maximizes the exact log-likelihood of the data, i.e. it exactly marginalizes out all latent variables.
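One way to see why this is exact: for an HMM, enumerating the discrete states sums p(x, z) over every latent path, which is precisely the marginal likelihood that the forward algorithm computes in polynomial time. A minimal pure-Python sketch with a toy 2-state HMM (all numbers are illustrative, not from this thread):

```python
from itertools import product

# Toy 2-state HMM with binary observations (illustrative numbers).
init = [0.6, 0.4]                  # p(z_0)
trans = [[0.7, 0.3], [0.2, 0.8]]   # p(z_t | z_{t-1})
emit = [[0.9, 0.1], [0.3, 0.7]]    # p(x_t | z_t)
obs = [0, 1, 1, 0]                 # observed sequence

def brute_force_likelihood(obs):
    """Sum p(x, z) over every latent path z -- exact marginalization."""
    total = 0.0
    for path in product(range(2), repeat=len(obs)):
        p = init[path[0]] * emit[path[0]][obs[0]]
        for t in range(1, len(obs)):
            p *= trans[path[t - 1]][path[t]] * emit[path[t]][obs[t]]
        total += p
    return total

def forward_likelihood(obs):
    """Forward algorithm: the same sum, computed in O(T * K^2)."""
    alpha = [init[k] * emit[k][obs[0]] for k in range(2)]
    for t in range(1, len(obs)):
        alpha = [sum(alpha[j] * trans[j][k] for j in range(2)) * emit[k][obs[t]]
                 for k in range(2)]
    return sum(alpha)
```

Both functions return the same number, so maximizing this quantity with respect to the model parameters is exact maximum-likelihood training; Pyro's parallel enumeration performs the same sum-product contraction internally.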


Thank you very much for your confirmation. That is very helpful!

Because I am thinking of pre-training the model part, which involves many neural network parameters, by

  • setting all the latent variables to have observations (these observations are noisy and imperfect, but I think they are still useful for pre-training, so the neural network parameters in the model will have a good starting point for the subsequent unsupervised learning). This step will be trained using an empty guide (i.e. the method mentioned in this post);
  • after the pre-training, making the model part totally unsupervised, i.e. removing the noisy observations.
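As a sanity check of this two-step plan, here is a minimal pure-Python analogue (a toy binomial mixture standing in for the neural-network model; all names and numbers are illustrative assumptions, not from this thread). Phase 1 fits the parameters by supervised MLE on noisy latent labels; phase 2 refines them by EM with the labels removed, i.e. with the latent assignments marginalized out:

```python
import random

random.seed(0)

# Synthetic data: two binomial components (illustrative numbers).
# Each point is the number of successes in n=5 Bernoulli trials.
n, true_p = 5, [0.9, 0.2]
z_true = [random.randrange(2) for _ in range(1000)]
x = [sum(random.random() < true_p[z] for _ in range(n)) for z in z_true]
# Noisy supervision: 20% of the latent labels are flipped.
z_noisy = [z ^ (random.random() < 0.2) for z in z_true]

# Phase 1: "pre-training" -- supervised MLE from the noisy labels,
# analogous to conditioning the latents on imperfect observations.
p = [sum(xi for xi, zi in zip(x, z_noisy) if zi == k)
     / (n * sum(zi == k for zi in z_noisy)) for k in (0, 1)]
w = 0.5  # mixture weight of component 1

# Phase 2: fully unsupervised EM, marginalizing z, warm-started from phase 1.
for _ in range(50):
    # E-step: responsibility of component 1 (binomial coefficients cancel).
    r = [w * p[1]**xi * (1 - p[1])**(n - xi)
         / ((1 - w) * p[0]**xi * (1 - p[0])**(n - xi)
            + w * p[1]**xi * (1 - p[1])**(n - xi)) for xi in x]
    # M-step: re-estimate mixture weight and success probabilities.
    w = sum(r) / len(r)
    p[1] = sum(ri * xi for ri, xi in zip(r, x)) / (n * sum(r))
    p[0] = sum((1 - ri) * xi for ri, xi in zip(r, x)) / (n * (len(r) - sum(r)))
```

The noisy-label estimates from phase 1 are biased toward each other (roughly 0.76 and 0.34 in expectation with 20% flips), but they give the unsupervised phase a warm start on the right side of the label-switching symmetry, after which EM recovers estimates close to the true 0.9 and 0.2. That is the same logic as pre-training with noisy observations and then dropping them.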

Can you please clarify what you mean by empty guide?
My understanding is that such a thing does not exist.
Do you mean that you are using an AutoDelta guide (Automatic Guide Generation — Pyro documentation) to compute the MAP estimate of your parameters?

If all the latents in your model are completely marginalized out, then you can just use an empty guide:

def guide(data):
    pass