Hard time understanding where p(x, z) is modelled in VAE tutorial

Sorry if this is a trivial question. I am a newbie to Pyro and probabilistic programming.
I came across this code snippet in the Variational Autoencoder tutorial:

# define the model p(x|z)p(z)
def model(self, x):
    # register PyTorch module `decoder` with Pyro
    pyro.module("decoder", self.decoder)
    with pyro.plate("data", x.shape[0]):
        # setup hyperparameters for prior p(z)
        z_loc = x.new_zeros(torch.Size((x.shape[0], self.z_dim)))
        z_scale = x.new_ones(torch.Size((x.shape[0], self.z_dim)))
        # sample from prior (value will be sampled by guide when computing the ELBO)
        z = pyro.sample("latent", dist.Normal(z_loc, z_scale).to_event(1))
        # decode the latent code z
        loc_img = self.decoder.forward(z)
        # score against actual images
        pyro.sample("obs", dist.Bernoulli(loc_img).to_event(1), obs=x.reshape(-1, 784))

I know that this is the generative model p(x|z) p(z). What I am having a hard time understanding is where p(x|z) and p(z) are actually being defined…

p(x|z) is the likelihood term. It is defined by the observed sample statement, which scores the data x under a Bernoulli distribution parameterized by the decoder output:

pyro.sample("obs", dist.Bernoulli(loc_img).to_event(1), obs=x.reshape(-1, 784))

p(z) is the prior. It is defined by the latent sample statement, a standard Normal over the latent code:

z = pyro.sample("latent", dist.Normal(z_loc, z_scale).to_event(1))
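To see how these two sample statements together define the joint p(x, z) = p(x|z) p(z), here is a minimal pure-Python sketch (no Pyro, assuming a hypothetical 1-D latent and a sigmoid stand-in for the decoder network):

```python
import math

def log_prior(z):
    # p(z) = Normal(0, 1), matching dist.Normal(z_loc, z_scale) above
    return -0.5 * z ** 2 - 0.5 * math.log(2 * math.pi)

def decoder(z):
    # stand-in for self.decoder.forward(z): maps z to a Bernoulli probability
    return 1.0 / (1.0 + math.exp(-z))

def log_likelihood(x, z):
    # p(x | z) = Bernoulli(decoder(z)), matching the "obs" sample statement
    p = decoder(z)
    return x * math.log(p) + (1 - x) * math.log(1 - p)

def log_joint(x, z):
    # log p(x, z) = log p(x | z) + log p(z)
    return log_likelihood(x, z) + log_prior(z)

print(log_joint(x=1.0, z=0.5))
```

In Pyro the same bookkeeping happens automatically: each `pyro.sample` statement registers a log-probability term, and together they make up log p(x, z).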

I recommend reading the SVI tutorials if you haven't already.
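As a rough complement to those tutorials: SVI maximizes the ELBO, E_q[log p(x, z) − log q(z|x)], where the guide supplies q(z|x). Here is a hedged single-sample sketch in plain Python (toy 1-D stand-ins for the model and guide, with a hypothetical sigmoid decoder):

```python
import math
import random

def normal_logpdf(value, loc, scale):
    # log density of Normal(loc, scale) at value
    return (-0.5 * ((value - loc) / scale) ** 2
            - math.log(scale) - 0.5 * math.log(2 * math.pi))

def decoder(z):
    # hypothetical decoder: maps z to a Bernoulli probability
    return 1.0 / (1.0 + math.exp(-z))

def bernoulli_logpmf(x, p):
    return x * math.log(p) + (1 - x) * math.log(1 - p)

def elbo_one_sample(x, q_loc, q_scale):
    z = random.gauss(q_loc, q_scale)              # z ~ q(z | x), the guide
    log_p = (bernoulli_logpmf(x, decoder(z))      # log p(x | z)
             + normal_logpdf(z, 0.0, 1.0))        # + log p(z)
    log_q = normal_logpdf(z, q_loc, q_scale)      # log q(z | x)
    return log_p - log_q

print(elbo_one_sample(x=1.0, q_loc=0.0, q_scale=1.0))
```

This is what the comment in the model ("value will be sampled by guide when computing the ELBO") refers to: at training time, z comes from the guide, not from the prior.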
