Dear @fehiepsi,
Thank you again for your reply!
As you suggested, I proceeded as follows:

```python
import torch
import pyro.distributions as dist
from pyro.infer import Predictive

# Get model outputs using draws from the variational density
posterior_predictive = Predictive(model=model, guide=variational_density,
                                  num_samples=num_mc_samples, return_sites=['_RETURN'])
predictive_logits = torch.mean(posterior_predictive(data)['_RETURN'], dim=0)
log_likelihood = dist.Categorical(logits=predictive_logits).log_prob(target).sum() / len(target)
```
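For completeness, here is a minimal sketch of the kind of model/guide pair the snippet above would run against (the unit-normal priors, the flattened `(N, D)` inputs, and the `AutoNormal` guide are illustrative assumptions, not necessarily the exact setup):

```python
import torch
import pyro
import pyro.distributions as dist
from pyro.infer.autoguide import AutoNormal

def model(data, target=None):
    # data: (N, D) flattened inputs, e.g. D = 784 for MNIST
    N, D = data.shape
    num_classes = 10
    w = pyro.sample("w", dist.Normal(torch.zeros(D, num_classes),
                                     torch.ones(D, num_classes)).to_event(2))
    b = pyro.sample("b", dist.Normal(torch.zeros(num_classes),
                                     torch.ones(num_classes)).to_event(1))
    logits = data @ w + b
    with pyro.plate("data", N):
        pyro.sample("obs", dist.Categorical(logits=logits), obs=target)
    return logits  # recorded by Predictive under the '_RETURN' site

variational_density = AutoNormal(model)
```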
For the softmax-regression model on MNIST I get a negative log-likelihood of about 1.3, and for the same model on CIFAR-10 one of about 29.4. So the negative log-likelihood of observing the data under this simple model increases drastically for the more complex dataset, which is what one would expect.
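For scale: these values are average negative log-likelihoods per test example in nats, i.e. (with $S$ posterior draws, $N$ test points, and $\ell_n^{(s)}$ the logits for example $n$ under draw $s$)

$$\mathrm{NLL} = -\frac{1}{N}\sum_{n=1}^{N}\log \mathrm{Categorical}\!\left(y_n \mid \bar{\ell}_n\right), \qquad \bar{\ell}_n = \frac{1}{S}\sum_{s=1}^{S}\ell_n^{(s)};$$

a uniform predictive over the 10 classes would come out at $-\log(1/10) \approx 2.30$ nats per example.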
Regards and thank you again for your help!