- What tutorial are you running?
I am working on the VAE example.
- What version of Pyro are you using?
I am using Pyro 1.8.0.
- Please link or paste relevant code, and steps to reproduce.
I am trying to visualize the prior p(z) in the model. Right after the line `z = pyro.sample("latent", dist.Normal(z_loc, z_scale).to_event(1))` I added:

```python
plt.figure()
plt.hist(z.detach().cpu().numpy().flatten(), bins=100)
plt.show()
```

The histogram looks like a standard Gaussian, which makes sense to me.
I then updated the guide by multiplying `z_loc` and `z_scale` by 100, as follows:
```python
def guide(self, x):
    # register PyTorch module `encoder` with Pyro
    pyro.module("encoder", self.encoder)
    with pyro.plate("data", x.shape[0]):
        # use the encoder to get the parameters used to define q(z|x)
        z_loc, z_scale = self.encoder.forward(x)
        z_loc = z_loc * 100
        z_scale = z_scale * 100
        # sample the latent code z
        pyro.sample("latent", dist.Normal(z_loc, z_scale).to_event(1))
```
After this modification, the histogram of z inside the model is no longer a standard normal distribution.
I thought p(z) in the model is the prior and should therefore always be a standard normal distribution. Can anyone explain why changing the guide alters the values of z observed in the model?