Predict Latent Variable from Unseen Data after Training

Hi,
I would like to predict the latent variable for an unseen data point using sampling.

My model has the form:

     P(x, z, phi) = P(x|z, phi) P(z) P(phi)

where these distributions are not conjugate. I want to calculate:

    P(z_t|X_t, X_{1:t-1}) = P(z_t, X_t | X_{1:t-1})  / P(X_t | X_{1:t-1})

Is there an easy way to compute this? Am I missing something? Below I detail some mathematical massaging of the equations in case it helps.

I am going to massage the first term on the right-hand side, using that z_t is a priori independent of X_{1:t-1}, so P(z_t | X_{1:t-1}) = P(z_t):

    P(z_t, X_t | X_{1:t-1}) = P(X_t | z_t, X_{1:t-1}) * P(z_t | X_{1:t-1})
                            = P(X_t | z_t, X_{1:t-1}) * P(z_t)
                            = Int dphi P(X_t | z_t, phi, X_{1:t-1}) * P(phi | X_{1:t-1}) * P(z_t)

P(X_t | X_{1:t-1}): the posterior predictive distribution.
P(phi | X_{1:t-1}): the posterior distribution.
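
To make the question concrete, here is a minimal sketch of the Monte Carlo version of that last integral, assuming we already hold posterior draws phi_s ~ P(phi | X_{1:t-1}) (e.g. from NUTS). The Gaussian likelihood and prior below are just toy stand-ins for the real model:

    import numpy as np
    from scipy.special import logsumexp
    from scipy.stats import norm

    def unnorm_log_post_z(z_t, x_t, phi_samples):
        # log P(z_t) + log[(1/S) * sum_s P(X_t | z_t, phi_s)], up to a constant
        log_prior = norm.logpdf(z_t, loc=0.0, scale=1.0)                # toy P(z_t)
        log_liks = norm.logpdf(x_t, loc=z_t * phi_samples, scale=1.0)   # toy P(X_t | z_t, phi_s)
        return log_prior + logsumexp(log_liks) - np.log(len(phi_samples))

    # stand-ins for posterior draws phi_s ~ P(phi | X_{1:t-1}) from NUTS
    phi_samples = np.random.normal(1.0, 0.1, size=500)
    x_t = 2.0

    # evaluate on a grid of z_t values (feasible when z_t is low-dimensional) and normalize
    z_grid = np.linspace(-5, 5, 200)
    log_post = np.array([unnorm_log_post_z(z, x_t, phi_samples) for z in z_grid])
    post = np.exp(log_post - logsumexp(log_post))   # approx. P(z_t | X_t, X_{1:t-1}) on the grid

A grid only works when z_t is low-dimensional; otherwise one would have to sample z_t from this unnormalized density, which is essentially the question.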

Is the point that you want to compute P(z_t|X_{1:t}) instead of P(z_t|X_{1:T})?

Sorry if the initial post was not clear. We measured T time points X_1, …, X_T. Instead of doing inference on the entire dataset, we want to run inference and compute the posterior using data points 1, …, T-1 and leave one out to predict its latent value.

M

If you fit a guide to a model that is conditioned on X_{1:t-1} and that contains the latent variables z_{1:t}, then you can use Predictive to approximate the posterior marginal P(z_t | X_{1:t-1}).
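
Roughly something like the following in NumPyro (Pyro's Predictive works analogously); the model, guide, and data here are made-up stand-ins, the point is only where Predictive enters:

    import jax.numpy as jnp
    from jax import random
    import numpyro
    import numpyro.distributions as dist
    from numpyro.infer import SVI, Trace_ELBO, Predictive
    from numpyro.infer.autoguide import AutoNormal
    from numpyro.optim import Adam

    def model(X_past, T):
        phi = numpyro.sample("phi", dist.Normal(0.0, 1.0))
        with numpyro.plate("time", T):
            z = numpyro.sample("z", dist.Normal(0.0, 1.0))   # z_1 ... z_t, the last one has no observation
        with numpyro.plate("obs", T - 1):
            numpyro.sample("x", dist.Normal(z[:-1] * phi, 1.0), obs=X_past)   # condition on X_{1:t-1}

    guide = AutoNormal(model)
    X_past = jnp.array([0.1, 0.3, -0.2, 0.5])                # X_{1:t-1}, toy data
    svi = SVI(model, guide, Adam(1e-2), Trace_ELBO())
    svi_result = svi.run(random.PRNGKey(0), 2000, X_past, T=5)

    # draw z from the fitted guide; the last column approximates P(z_t | X_{1:t-1})
    predictive = Predictive(model, guide=guide, params=svi_result.params,
                            num_samples=1000, return_sites=["z"])
    z_t_draws = predictive(random.PRNGKey(1), X_past, T=5)["z"][:, -1]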

Hi Martin,
Absolutely! If I use variational inference, I can trivially compute z by looking at the variational approximation to the posterior q(z | x) and compute a value for a new data point if q is amortized. What I was asking is: is there an easy way to do it if we are using NUTS to sample from the posterior, so that all we have are samples and not a functional form of the posterior?
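
For reference, this is the amortized case I mean, as a toy sketch (the linear "encoder" and all names below are made up for illustration):

    import jax.numpy as jnp
    from jax import nn, random
    import numpyro
    import numpyro.distributions as dist
    from numpyro.infer import SVI, Trace_ELBO
    from numpyro.optim import Adam

    def model(x):
        phi = numpyro.sample("phi", dist.Normal(0.0, 1.0))
        with numpyro.plate("data", x.shape[0]):
            z = numpyro.sample("z", dist.Normal(0.0, 1.0))
            numpyro.sample("x", dist.Normal(z * phi, 1.0), obs=x)

    def guide(x):
        # q(phi), plus an amortized q(z | x): a linear "encoder" maps x to loc/scale
        phi_loc = numpyro.param("phi_loc", 0.0)
        phi_scale = nn.softplus(numpyro.param("phi_scale_raw", 0.0))
        numpyro.sample("phi", dist.Normal(phi_loc, phi_scale))
        w = numpyro.param("w", 0.0)
        b = numpyro.param("b", 0.0)
        s = numpyro.param("s", 0.0)
        with numpyro.plate("data", x.shape[0]):
            numpyro.sample("z", dist.Normal(w * x + b, nn.softplus(s)))

    x_train = jnp.array([0.1, 0.3, -0.2, 0.5])               # X_{1:t-1}
    svi = SVI(model, guide, Adam(1e-2), Trace_ELBO())
    params = svi.run(random.PRNGKey(0), 2000, x_train).params

    # for an unseen X_t, the encoder gives q(z_t | X_t) directly, with no refitting
    x_t = 2.0
    q_z_t = dist.Normal(params["w"] * x_t + params["b"], nn.softplus(params["s"]))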

Thanks!