Dear all,
Suppose we are given a model
\begin{align}p(\theta)\cdot \prod_{i=1}^n p(\mathbf{x}_i|\mathbf{z}_i, \theta)p(\mathbf{z}_i|\theta),\end{align}
where \(\mathbf{z}_i\) and \(\theta\) are the local and global latent variables, respectively.
I am wondering whether we can efficiently sample (with MCMC) from the per-data-point posterior \(p(\theta, \mathbf{z}_i \mid \mathbf{x}_i)\), conditioned on a single point \(\mathbf{x}_i\), rather than from \(p(\theta, \mathbf{z}_{1:n} \mid \mathbf{x}_{1:n})\), conditioned on the whole dataset \(\mathbf{x}_{1:n}\).
In short, the question is: is there any simple way to obtain \(m\) samples \(\left[\{(\theta_i^{(j)}, \mathbf{z}_i^{(j)})\}_{i=1}^n\right]_{j=1}^m\) with \((\theta_i^{(j)}, \mathbf{z}_i^{(j)}) \sim p(\theta, \mathbf{z}_i \mid \mathbf{x}_{i})\)? (I.e., the expected size of the returned \(\theta\)-sample is \(m \times n \times d\), with \(d\) being \(\theta\)'s dimensionality.)
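For concreteness, here is a toy NumPy sketch of the naive per-data-point loop I would like to avoid. The Gaussian model and the random-walk Metropolis kernel are just placeholders of my own, not the actual model; the point is only the shape of the output.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the model (placeholder densities, d = 1):
#   theta ~ N(0, 1)       (global latent)
#   z_i   ~ N(theta, 1)   (local latent)
#   x_i   ~ N(z_i, 1)     (observation)
def log_joint_single(theta, z, x):
    """log p(theta) + log p(z | theta) + log p(x | z) for ONE data point."""
    return -0.5 * (theta**2 + (z - theta) ** 2 + (x - z) ** 2)

def metropolis_single_point(x, m, step=0.5, burn=500):
    """Random-walk Metropolis targeting p(theta, z | x) for a single x."""
    state = np.zeros(2)  # (theta, z)
    lp = log_joint_single(state[0], state[1], x)
    out = np.empty((m, 2))
    for t in range(burn + m):
        prop = state + step * rng.standard_normal(2)
        lp_prop = log_joint_single(prop[0], prop[1], x)
        if np.log(rng.random()) < lp_prop - lp:
            state, lp = prop, lp_prop
        if t >= burn:
            out[t - burn] = state
    return out  # shape (m, 2): columns are theta, z

n, m, d = 5, 100, 1
x_data = rng.standard_normal(n)

# The naive approach: one independent chain per data point.
samples = np.stack([metropolis_single_point(x, m) for x in x_data])
theta_samples = samples[..., :d]  # shape (n, m, d)
print(theta_samples.shape)        # (5, 100, 1)
```

This works, but the explicit Python loop over the \(n\) data points is exactly the kind of thing I was hoping Pyro could vectorize away.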
Let’s say the observation site is defined inside pyro.plate. If the global latent is absent, then sampling is straightforward, and we don’t need to explicitly write a for-loop over data points (this is what I meant by “efficiently” above). Would the same be possible even with a global latent (without defining a separate model that samples a single data point)? If not, what would be the Pyro way to implement this?
Thank you for reading the question. I would appreciate any feedback!