When can Trace_ELBO() become negative?


I’m new to this and I was using pyro.infer.SVI with pyro.infer.Trace_ELBO to train a CVAE.

After a few epochs I started getting negative losses. I'm fairly sure the issue is with my input, because when I use random normalized input the losses are no longer negative.
This confuses me, since the loss should be the negative ELBO and therefore positive. It would be great if someone could let me know under what circumstances this loss can be negative.


the ELBO is a lower bound on the log evidence log p(data).

if the data is continuous, call it X, and you change the units of X by a factor sigma, then log p(data) changes by an additive term proportional to log(sigma). as such the ELBO can take either sign, so negative ELBOs are fine in general.
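To make the units argument concrete, here is a small self-contained sketch (plain Python, no Pyro) of the change-of-variables rule for a univariate Normal: rescaling the data Y = X / s shifts the log density by the constant log(s) per data point, so the sign of the log likelihood (and hence of the ELBO) depends on the units you happen to measure in. The helper `normal_logpdf` is just an illustrative stand-in for a model's Normal likelihood.

```python
import math

def normal_logpdf(x, mu=0.0, sigma=1.0):
    """Log density of a univariate Normal N(mu, sigma^2)."""
    return -0.5 * math.log(2 * math.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)

# Data point in its original units.
x = 0.5
lp = normal_logpdf(x)  # log p_X(x), negative here

# Change of units: Y = X / s (e.g. meters -> "hundredths", s = 0.01).
s = 0.01
y = x / s
# Change of variables: log p_Y(y) = log p_X(s * y) + log s
lp_rescaled = normal_logpdf(s * y) + math.log(s)

# The two log densities differ by the additive constant log(s):
print(lp, lp_rescaled, lp - lp_rescaled)

# A tightly concentrated density already has positive log density at its mode,
# so a model fitting such data can legitimately report a negative loss:
print(normal_logpdf(0.0, 0.0, 0.01))
```

The same additive shift happens per data point in a multivariate model, which is why rescaling or normalizing your inputs (as you observed) can flip the sign of the reported loss without anything being wrong.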

Got it.

Just a follow-up question: if the loss can be negative and I keep minimizing it toward a negative value with a very large absolute value, the gradients could eventually explode. Are there any suggestions you would give for avoiding such situations?

Thank you very much :slight_smile:

it's hard to say in general, especially without details. see here for some general tips.

keep in mind that the ELBO of a vanilla VAE, as in the original Kingma paper, is actually unbounded, since the scale of the Normal likelihood is allowed to go to zero. as such the optimization problem isn't actually well-defined. so you need to think about the different limits of your likelihood, examine what's happening to the scale (or its analogs), lower-bound the scale to be above 0.01 or similar, etc.
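A minimal sketch of that failure mode and the suggested fix, again in plain Python rather than Pyro (in an actual Pyro model you would apply the same idea to the decoder's scale, e.g. with `torch.clamp` before constructing `dist.Normal`; the 0.01 floor below is just the arbitrary example value from the reply):

```python
import math

def normal_logpdf(x, mu, sigma):
    """Log density of a univariate Normal N(mu, sigma^2)."""
    return -0.5 * math.log(2 * math.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)

# With a perfect reconstruction (x == mu), the log likelihood grows
# without bound as the scale goes to zero, so the ELBO is unbounded:
for sigma in (1.0, 0.1, 0.001):
    print(sigma, normal_logpdf(0.0, 0.0, sigma))

# Lower-bounding the scale caps that term and keeps the objective finite:
MIN_SCALE = 0.01  # arbitrary floor; tune for your data's units

def clamped_logpdf(x, mu, raw_sigma):
    """Same likelihood, but with the scale floored at MIN_SCALE."""
    return normal_logpdf(x, mu, max(raw_sigma, MIN_SCALE))

print(clamped_logpdf(0.0, 0.0, 1e-9))  # bounded: behaves as sigma = MIN_SCALE
```

With the floor in place the loss can still go negative (that is fine, per the units argument above), but it can no longer diverge to minus infinity through a collapsing likelihood scale.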

Yes, I’ve read this page also. It’s quite helpful.

Thanks again for your kind help. Have a nice day :wink: