Hi all,
I am new to probabilistic programming and Bayesian inference, so I apologize if this question is trivial or lacks clarity. I searched the forum for similar questions but couldn't find an answer.
Let us consider a problem where we want to sample from P(Y|X,D), where X is the input, D is the training dataset, and Y is the expected output. Also, let us assume that we are using a Bayesian Neural Network (BNN) as the model, with a guide whose weights I will refer to as theta here. After inference with SVI, my understanding is that Predictive(model, guide, num_samples, ...) will approximate the following integral by averaging over num_samples samples of theta:

P(Y|X,D) = ∫ P(Y|X,theta) P(theta|D) dtheta ≈ (1/num_samples) * Σ_i P(Y|X,theta_i), with theta_i ~ q(theta),

where q(theta) is the guide's approximation of the posterior P(theta|D).
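For concreteness, here is roughly the setup I have in mind (a minimal sketch only; the architecture, the fixed observation noise of 0.1, and the toy data are placeholders I made up):

```python
import torch
import torch.nn as nn
import pyro
import pyro.distributions as dist
from pyro.infer import SVI, Trace_ELBO, Predictive
from pyro.infer.autoguide import AutoDiagonalNormal
from pyro.nn import PyroModule, PyroSample
from pyro.optim import Adam

class BNN(PyroModule):
    def __init__(self, in_dim=1, hidden=16):
        super().__init__()
        self.fc1 = PyroModule[nn.Linear](in_dim, hidden)
        self.fc1.weight = PyroSample(dist.Normal(0., 1.).expand([hidden, in_dim]).to_event(2))
        self.fc1.bias = PyroSample(dist.Normal(0., 1.).expand([hidden]).to_event(1))
        self.fc2 = PyroModule[nn.Linear](hidden, 1)
        self.fc2.weight = PyroSample(dist.Normal(0., 1.).expand([1, hidden]).to_event(2))
        self.fc2.bias = PyroSample(dist.Normal(0., 1.).expand([1]).to_event(1))

    def forward(self, x, y=None):
        mean = self.fc2(torch.tanh(self.fc1(x))).squeeze(-1)  # E[Y|X,theta]
        with pyro.plate("data", x.shape[0]):
            pyro.sample("obs", dist.Normal(mean, 0.1), obs=y)  # fixed aleatoric noise
        return mean  # returned so Predictive can expose it via "_RETURN"

# toy data, just for illustration
x_train = torch.rand(100, 1)
y_train = torch.sin(6 * x_train).squeeze(-1) + 0.1 * torch.randn(100)

model = BNN()
guide = AutoDiagonalNormal(model)
svi = SVI(model, guide, Adam({"lr": 0.01}), Trace_ELBO())
for _ in range(2000):
    svi.step(x_train, y_train)

predictive = Predictive(model, guide=guide, num_samples=500)
samples = predictive(x_train)  # samples["obs"] ~ approximate P(Y|X,D)
```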
Now, if I'm not mistaken, the variance of the posterior P(theta|D) can be considered the epistemic uncertainty, while the aleatoric uncertainty is encoded in the likelihood P(Y|X,theta).
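If I state that precisely, I believe the two terms are tied together by the law of total variance (using the same notation as above):

Var(Y|X,D) = E_{theta ~ P(theta|D)}[ Var(Y|X,theta) ]   (aleatoric: average observation noise)
           + Var_{theta ~ P(theta|D)}[ E(Y|X,theta) ]    (epistemic: disagreement across thetas)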
However, Predictive returns samples from P(Y|X,D), and estimating the variance of those predictions (e.g. via the sample variance) only gives me the total uncertainty; I lose the ability to distinguish between epistemic and aleatoric uncertainty.
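Concretely, the decomposition I would like to compute looks something like this (a sketch based on the law of total variance above; it assumes the model returns its predictive mean, as in my sketch further up, so that Predictive exposes it through the "_RETURN" site, and I am not sure this is the idiomatic way to do it in Pyro):

```python
# Ask Predictive for both the sampled observations and the model's
# return value, i.e. the per-theta predictive mean E[Y|X,theta]:
predictive = Predictive(model, guide=guide, num_samples=1000,
                        return_sites=("obs", "_RETURN"))
samples = predictive(x_new)  # x_new: hypothetical test inputs

total_var = samples["obs"].var(dim=0)      # Var(Y|X,D): total uncertainty
epistemic = samples["_RETURN"].var(dim=0)  # Var_theta[ E(Y|X,theta) ]
aleatoric = total_var - epistemic          # approx. E_theta[ Var(Y|X,theta) ]
# The subtraction is Monte Carlo noisy and can come out slightly negative;
# with the fixed noise of 0.1 above, aleatoric should be close to 0.1 ** 2.
```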
Is there any way to recover the uncertainty from P(theta|D), which (I think) is the uncertainty captured by the guide? Thank you very much for your help.
Edit: After playing around, it seems that pyro.param can be used to retrieve what I'm looking for. For example, if the guide is AutoDiagonalNormal, then pyro.param("AutoDiagonalNormal.scale") seems to provide what I need. However, I am unsure, so confirmation of this would still be very much appreciated.
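For reference, this is what I am doing (with guide = AutoDiagonalNormal(model) as in the sketch above); I believe guide.get_posterior() exposes the same information without hard-coding the parameter name, but again I am not sure:

```python
# Per-weight posterior parameters, in the guide's unconstrained space:
loc = pyro.param("AutoDiagonalNormal.loc")      # posterior means of the weights
scale = pyro.param("AutoDiagonalNormal.scale")  # posterior std devs of the weights

# The same information via the autoguide API, without hard-coding the name:
posterior = guide.get_posterior()  # a diagonal Normal over the flattened latents
print(posterior.stddev)
```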