Quarantine a hidden variable from inference

import pyro
import pyro.distributions as dist
from pyro.infer import MCMC, NUTS

def model(x, y):
    beta0 = pyro.sample("beta0", dist.Normal(loc=0., scale=10.))
    z_loc, z_cov = f1(x, beta0)  # f1, f2, and N are defined elsewhere
    z = pyro.sample("latent", dist.MultivariateNormal(z_loc, covariance_matrix=z_cov))
    y_loc = f2(x, z)
    sigma = pyro.sample("sigma", dist.Exponential(rate=1/10))

    with pyro.plate("data", N):
        pyro.sample("obs", dist.Normal(y_loc, sigma), obs=y)

nuts_kernel = NUTS(model)
mcmc = MCMC(nuts_kernel, num_samples=3000)
mcmc.run(x, y)

This model aims to infer the posteriors of beta0 and sigma. Here f1 and f2 are known (deterministic) functions.
My question is about z. The pyro.sample("latent", ...) statement assumes z has a multivariate Gaussian prior, so the model will infer a posterior over z as well. But in my case z_loc and z_cov are deterministic given beta0, and I don't want any inference on z; it should be a forward pass instead. How can I deal with this?

Not entirely clear what you mean, but if you want the outputs of f1 to be used "deterministically", you presumably need to compute the integral

y_loc = \int dz f2(x, z) p(z | z_loc, z_cov)

although that's probably hard unless f2 is simple.
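In principle this integral can be approximated by plain Monte Carlo: draw z from p(z | z_loc, z_cov) and average f2 over the draws. A sketch with a hypothetical stand-in f2 and made-up shapes (note the resulting y_loc is a noisy estimate, which is problematic inside an MCMC kernel):

```python
import torch

def f2(x, z):                                  # stand-in for the real network
    return x * z.mean()

z_loc, z_cov = torch.zeros(3), torch.eye(3)    # hypothetical outputs of f1
x = torch.ones(5)
p_z = torch.distributions.MultivariateNormal(z_loc, covariance_matrix=z_cov)

zs = p_z.sample((2000,))                       # (2000, 3) draws of z
# Monte Carlo estimate of E_z[f2(x, z)]
y_loc = torch.stack([f2(x, z) for z in zs]).mean(0)
```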

Thanks for the reply. f2 is a neural network. I will need to integrate it numerically.

if z is low-dimensional, e.g. 1-4 dimensions, you can use Gauss-Hermite quadrature

if it's higher-dimensional it becomes much harder, and you probably just need to do inference over z explicitly, which effectively does the integral via sampling
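For the 1-D case, Gauss-Hermite quadrature is available in NumPy via numpy.polynomial.hermite.hermgauss; after substituting z = mu + sqrt(2)*sigma*t, the expectation under N(mu, sigma^2) is a weighted sum over the nodes. A minimal sketch:

```python
import numpy as np

def gh_expectation(f, mu, sigma, deg=32):
    # E_{z ~ N(mu, sigma^2)}[f(z)] via Gauss-Hermite quadrature:
    # nodes/weights target \int e^{-t^2} g(t) dt, so substitute
    # z = mu + sqrt(2)*sigma*t and normalize by 1/sqrt(pi)
    t, w = np.polynomial.hermite.hermgauss(deg)
    z = mu + np.sqrt(2.0) * sigma * t
    return (w * f(z)).sum() / np.sqrt(np.pi)

# sanity check: E[z^2] under N(0, 1) is exactly 1
val = gh_expectation(lambda z: z**2, 0.0, 1.0)
```

For 2-4 dimensions the same nodes are combined in a tensor product, at the cost of deg**dim evaluations of f.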

Unfortunately, the dimension of z is rather high (over 100). I am trying to bypass the inference over z because the computation time is much longer than I'd like. Maybe I should first try to reduce the dimension of z.
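One hedged way to reduce the dimension, assuming z_cov is effectively low-rank: keep the top-k eigenpairs of z_cov, write z ≈ z_loc + W @ w with w ~ N(0, I_k), and let inference run over the k-dimensional w instead of the full z. A sketch with made-up sizes:

```python
import torch

d, k = 100, 5
A = torch.randn(d, d)
z_cov = A @ A.T + torch.eye(d)            # hypothetical dense covariance from f1
z_loc = torch.zeros(d)                    # hypothetical mean from f1

# top-k eigenpairs (eigh returns eigenvalues in ascending order)
evals, evecs = torch.linalg.eigh(z_cov)
W = evecs[:, -k:] * evals[-k:].sqrt()     # (d, k) low-rank factor

# inference would now target the k-dim w; z is a deterministic map of w
w = torch.randn(k)
z = z_loc + W @ w
```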