Could you please explain in brief the forward pass in VAE?
Is it like the following steps?

Guide gets input X and produces the parameters of the latent distribution q(z|X).

Sample z from that latent distribution.

Model samples from the prior z ~ N(0, 1) so that the KL divergence between the approximate posterior Q(z|X) and the prior P(z) can be computed.

Model decodes the latent z (the guide's output), then we check the reconstruction.

Backprop after that.

Last thing…
The reparameterization trick is taken care of internally in SVI, right?
Thanks!
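As a side note on that last question, the reparameterization trick the post asks about can be sketched in plain PyTorch (the shapes here are toy values, not from the tutorial): instead of sampling z directly, the randomness is isolated in an auxiliary noise variable so gradients flow to the distribution's parameters.

```python
import torch

# Hypothetical toy dimensions, chosen only for illustration.
batch, z_dim = 4, 2
z_loc = torch.zeros(batch, z_dim, requires_grad=True)
z_scale = torch.ones(batch, z_dim, requires_grad=True)

# Reparameterization: rather than sampling z ~ N(z_loc, z_scale) directly,
# sample eps ~ N(0, 1) and compute z = z_loc + z_scale * eps.
# The sampling step is now a deterministic function of the parameters,
# so backprop reaches z_loc and z_scale.
eps = torch.randn(batch, z_dim)
z = z_loc + z_scale * eps

# Any downstream loss can be backpropagated through the sample.
loss = (z ** 2).sum()
loss.backward()
print(z_loc.grad is not None, z_scale.grad is not None)  # True True
```

This is what `dist.Normal(...).rsample()` does under the hood, and it is why SVI can differentiate through the `pyro.sample` site in the guide below without any extra work from the user.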
define the guide (i.e. variational distribution) q(z|x)

```python
def guide(self, x):
    # register PyTorch module `encoder` with Pyro
    pyro.module("encoder", self.encoder)
    with pyro.plate("data", x.shape[0]):
        # use the encoder to get the parameters used to define q(z|x)
        z_loc, z_scale = self.encoder.forward(x)
        # sample the latent code z
        pyro.sample("latent", dist.Normal(z_loc, z_scale).to_event(1))
```
define the model p(x|z)p(z)

```python
def model(self, x):
    # register PyTorch module `decoder` with Pyro
    pyro.module("decoder", self.decoder)
    with pyro.plate("data", x.shape[0]):
        # set up hyperparameters for prior p(z)
        z_loc = x.new_zeros(torch.Size((x.shape[0], self.z_dim)))
        z_scale = x.new_ones(torch.Size((x.shape[0], self.z_dim)))
        # sample from prior (value will be sampled by guide when computing the ELBO)
        z = pyro.sample("latent", dist.Normal(z_loc, z_scale).to_event(1))
        # decode the latent code z
        loc_img = self.decoder.forward(z)
        # score against actual images (reshape(-1, 784) so any batch size works)
        pyro.sample("obs", dist.Bernoulli(loc_img).to_event(1), obs=x.reshape(-1, 784))
```
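To connect this back to the steps in the question, the two quantities SVI estimates from this model/guide pair, the KL term and the reconstruction term of the ELBO, can be sketched in plain PyTorch. The tensors standing in for encoder/decoder outputs below are assumptions for illustration, not the tutorial's networks:

```python
import torch

# Assumed toy shapes: batch of 4, 2-dim latent, 784-dim binarized images.
batch, z_dim, x_dim = 4, 2, 784
z_loc = torch.randn(batch, z_dim)          # stand-in for encoder output
z_scale = torch.rand(batch, z_dim) + 0.1   # stand-in for encoder output
x = torch.bernoulli(torch.full((batch, x_dim), 0.5))

# Reparameterized sample from q(z|x).
z = z_loc + z_scale * torch.randn(batch, z_dim)

# Closed-form KL(q(z|x) || p(z)) for a diagonal Gaussian against N(0, I).
kl = 0.5 * (z_scale**2 + z_loc**2 - 1 - torch.log(z_scale**2)).sum(-1)

# Reconstruction term: Bernoulli log-likelihood of x under the decoder output.
loc_img = torch.sigmoid(torch.randn(batch, x_dim))  # stand-in for decoder output
recon = (x * torch.log(loc_img) + (1 - x) * torch.log(1 - loc_img)).sum(-1)

# Negative ELBO averaged over the batch; SVI minimizes (a Monte Carlo
# estimate of) this quantity.
neg_elbo = (kl - recon).mean()
```

Note that `Trace_ELBO` does not use the closed-form KL shown here; it estimates both terms by Monte Carlo from the sampled `z`, which is why the model's prior `pyro.sample("latent", ...)` site is scored at the value drawn by the guide.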