Estimate model evidence with HMC

I’d like to estimate the model evidence when my inference algorithm is HMC. With importance sampling, I’m able to do so with this code snippet:

import torch
from pyro.infer import Importance

# draw samples from the prior (guide=None) and keep their importance weights
importance = Importance(f, guide=None, num_samples=num_samples)
posterior = importance.run(*args)
log_probs = torch.stack(posterior.log_weights)  # one log weight per sample
# log evidence ~= log of the average importance weight
model_evidence = torch.logsumexp(log_probs, dim=-1) - torch.log(torch.tensor(float(num_samples)))

I’m unfamiliar with the MCMC API; is there anything similar I can do? Even a way to get the log_prob of a sample trace would be helpful.

Hi Ptkyr. Try the extra_fields argument in your sampler.run() call. It lets you pull out additional properties of each sample, e.g. the potential energy, which is the negative log joint density of the sample. I have an example of using something similar (though a little more complicated) in a post on my blog. From there you can estimate the evidence from those per-sample log densities much as you do in your snippet, though the estimator converges poorly numerically, so be careful; a sketch follows below.
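For concreteness, here is a minimal sketch assuming NumPyro's MCMC (whose run() accepts extra_fields); the toy model, data, and sample counts are placeholders standing in for yours, and the evidence estimate shown is a harmonic-mean estimator, i.e. exactly the poorly converging approach cautioned about above:

import jax.numpy as jnp
from jax import random
from jax.scipy.special import logsumexp
import numpyro
import numpyro.distributions as dist
from numpyro.infer import MCMC, NUTS
from numpyro.infer.util import log_likelihood

def model(y):
    # placeholder model standing in for your f
    mu = numpyro.sample("mu", dist.Normal(0.0, 1.0))
    numpyro.sample("obs", dist.Normal(mu, 1.0), obs=y)

y = jnp.array([0.3, -0.1, 0.8])  # placeholder data
mcmc = MCMC(NUTS(model), num_warmup=500, num_samples=2000)
mcmc.run(random.PRNGKey(0), y, extra_fields=("potential_energy",))

# potential_energy is the negative log joint density of each sample
# (in NumPyro's unconstrained space), i.e. -log p(y, theta) per draw
log_joint = -mcmc.get_extra_fields()["potential_energy"]

# per-sample log-likelihoods log p(y | theta), summed over observations
log_lik = log_likelihood(model, mcmc.get_samples(), y)["obs"].sum(axis=-1)

# harmonic-mean estimate: 1/Z = E_posterior[1 / p(y | theta)], so
# log Z ~= -(logsumexp(-log_lik) - log N); consistent but high-variance
n = log_lik.shape[0]
log_evidence = -(logsumexp(-log_lik) - jnp.log(n))
print(log_evidence)

The harmonic-mean estimator is notoriously high-variance, so treat the result as a rough check rather than a reliable number.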

Otherwise, look at nested sampling in numpyro, which estimates the evidence much more directly. It maps the probability contours of the posterior less efficiently than HMC, but its evidence estimate converges much better.
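A minimal sketch of that route, assuming the NestedSampler wrapper in numpyro.contrib.nested_sampling (which requires the jaxns package) and reusing the placeholder model from above:

import jax.numpy as jnp
from jax import random
import numpyro
import numpyro.distributions as dist
from numpyro.contrib.nested_sampling import NestedSampler

def model(y):
    # same placeholder model as above
    mu = numpyro.sample("mu", dist.Normal(0.0, 1.0))
    numpyro.sample("obs", dist.Normal(mu, 1.0), obs=y)

y = jnp.array([0.3, -0.1, 0.8])  # placeholder data
ns = NestedSampler(model)
ns.run(random.PRNGKey(1), y)

# print_summary() reports the log evidence estimate (logZ) and its uncertainty
ns.print_summary()

# resampled posterior draws are also available if you need them
samples = ns.get_samples(random.PRNGKey(2), num_samples=1000)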
