Hi All,
While experimenting with the tutorial on VAEs, I changed the generating distribution of the pixel values to Gaussians.
```python
def model(self, x):
    # register PyTorch module `decoder` with Pyro
    pyro.module("decoder", self.decoder)
    with pyro.iarange("data", x.size(0)):
        # setup hyperparameters for prior p(z)
        z_loc = x.new_zeros(torch.Size((x.size(0), self.z_dim)))
        z_scale = x.new_ones(torch.Size((x.size(0), self.z_dim)))
        # sample from prior (value will be sampled by guide when computing the ELBO)
        z = pyro.sample("latent", dist.Normal(z_loc, z_scale).independent(1))
        # decode the latent code z
        loc_img = self.decoder.forward(z)
        # score against actual images
        # here is the change: Gaussian likelihood with fixed sigma = 0.1
        sigmas = x.new_ones(torch.Size((x.size(0), 784))) * 0.1
        pyro.sample("obs", dist.Normal(loc_img, sigmas).independent(1), obs=x.reshape(-1, 784))
        # return the loc so we can visualize it later
        return loc_img
```
However, when I try to train, the loss keeps consistently decreasing towards more negative values. A typical training run looks like this:
```
[epoch 000] average training loss: 935.3164
[epoch 000] average test loss: 62.3496
[epoch 001] average training loss: -158.1064
[epoch 002] average training loss: -398.4471
[epoch 003] average training loss: -506.6157
[epoch 004] average training loss: -573.4166
[epoch 005] average training loss: -618.9186
[epoch 005] average test loss: -647.4464
[epoch 006] average training loss: -652.3466
[epoch 007] average training loss: -677.4514
[epoch 008] average training loss: -696.5506
[epoch 009] average training loss: -711.8633
[epoch 010] average training loss: -724.7451
[epoch 010] average test loss: -735.1249
```
Notice also the initial positive values.
What could be the reason for this? Is the model not supposed to work with a normal distribution for the pixels? I understand that the Bernoulli assumption effectively computes a cross-entropy (reconstruction loss), but it seems to me that a normal distribution is another valid way to model the pixel distribution.
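One thing worth noting (a quick sanity check, standalone and using only the standard library, assuming the same fixed sigma = 0.1 as in the model above): unlike a Bernoulli probability, a continuous Normal density is not bounded by 1, so its log-probability can be positive when sigma is small. This means the loss (the negative ELBO) can legitimately go far below zero:

```python
import math

# log-density of Normal(loc, sigma) evaluated at its own mean:
#   log N(x = loc) = -log(sigma) - 0.5 * log(2*pi)
sigma = 0.1
logp_per_pixel = -math.log(sigma) - 0.5 * math.log(2 * math.pi)
print(logp_per_pixel)        # ~1.38, positive because sigma < 1/sqrt(2*pi) ~ 0.399

# Summed over the 784 pixels of an MNIST image, a near-perfect
# reconstruction contributes roughly this much to the ELBO:
print(784 * logp_per_pixel)  # ~1085, so a loss around -1000 is plausible
```

If this is right, the increasingly negative values would just reflect the reconstruction log-density growing as training progresses, rather than a bug in the model.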