Conventional loss with ELBO loss

Hi,

I want to put priors on some of the layers in the network and learn their parameters, which Pyro allows. However, I want to train the network in the following manner; I was wondering whether this is possible, and whether the training strategy has some flaw that I am overlooking (as far as I can tell, it might help me train faster):

Say, for 5 iterations, I sample a set of parameters and optimize the ELBO using the guide and model functions;

then, for the next 2 iterations, I sample a set of parameters and optimize MSE, cross-entropy, etc. (see the sketch below).
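For concreteness, here is a minimal sketch of what this alternating schedule might look like in Pyro. Everything here is illustrative: `model`, `guide`, `x`, `y`, the warm-up step, and the optimizer settings are placeholders, and it assumes the model returns its predictions and that the guide's sample sites are reparameterized so gradients flow back to the guide's parameters.

```python
import torch
import pyro
import pyro.poutine as poutine
from pyro.infer import SVI, Trace_ELBO
from pyro.optim import Adam

# Assumes model(x, y) and guide(x, y) are already defined and that
# model returns its predictions.
svi = SVI(model, guide, Adam({"lr": 1e-3}), loss=Trace_ELBO())
svi.step(x, y)  # one warm-up step so the guide's params are registered

params = [pyro.param(name).unconstrained() for name in pyro.get_param_store().keys()]
mse_optim = torch.optim.Adam(params, lr=1e-3)

for epoch in range(100):
    for _ in range(5):  # 5 iterations optimizing the ELBO
        svi.step(x, y)
    for _ in range(2):  # 2 iterations optimizing plain MSE
        mse_optim.zero_grad()
        # sample a set of parameters from the guide and run the model with them
        trace = poutine.trace(guide).get_trace(x, y)
        pred = poutine.replay(model, trace=trace)(x, y)
        torch.nn.functional.mse_loss(pred, y).backward()
        mse_optim.step()
```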

edit:

I realize this might be more helpful than the above: can I optimize ELBO + K*MSE (etc.), where K is some hyperparameter?

Any hints will be appreciated.

Thanks!

It seems that what you want to optimize is the ELBO of a Bayesian network whose loss function is loss + K*MSE, where loss is the loss function of your original network.

Thus it should be fine, as long as loss + K*MSE makes sense for an ordinary network.

The easiest way to do this would be to add an additional observe statement using a normal probability distribution in your model. (Note that the log prob of a normal distribution gives you a quadratic term; the constant K would then effectively be determined by the variance of the normal distribution.)
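In code, that suggestion might look something like the sketch below; the tiny Bayesian linear model is just a stand-in for a network with priors on its layers, and the distributions and scales are placeholders.

```python
import torch
import pyro
import pyro.distributions as dist

def model(x, y):
    # stand-in for "priors on some of the layers": a single weight vector
    w = pyro.sample("w", dist.Normal(torch.zeros(x.shape[1]), 1.0).to_event(1))
    pred = x @ w

    # the network's ordinary likelihood term
    pyro.sample("obs", dist.Normal(pred, 1.0).to_event(1), obs=y)

    # the additional observe statement: since
    # log N(y | pred, sigma) = -(y - pred)^2 / (2 * sigma^2) + const,
    # this adds an MSE penalty to the ELBO with effective weight
    # K = 1 / (2 * sigma^2)
    sigma = 0.5  # smaller sigma <=> larger K
    pyro.sample("extra_mse_obs", dist.Normal(pred, sigma).to_event(1), obs=y)
```

So instead of tuning K directly, you tune sigma: halving sigma quadruples the weight on the quadratic term.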


I didn’t quite get you. The ELBO itself is a loss term, right? I want to add some other terms to that loss function.

Thank you for the suggestion. I was able to make it work using this approach.
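For reference, the two objectives can also be combined directly using `Trace_ELBO.differentiable_loss`, which Pyro provides for custom training loops. The following is only an illustrative sketch, not the exact code used here: `model`, `guide`, `x`, `y`, and `K` are placeholders, and the model is assumed to return its predictions.

```python
import torch
import pyro
import pyro.poutine as poutine
from pyro.infer import Trace_ELBO

elbo = Trace_ELBO()
K = 10.0  # hypothetical weight for the extra MSE term

guide(x, y)  # run once so the guide's parameters are registered
params = [pyro.param(name).unconstrained() for name in pyro.get_param_store().keys()]
optim = torch.optim.Adam(params, lr=1e-3)

for step in range(1000):
    optim.zero_grad()
    # differentiable ELBO term
    loss = elbo.differentiable_loss(model, guide, x, y)
    # extra MSE term, computed from a single posterior sample of the weights
    trace = poutine.trace(guide).get_trace(x, y)
    pred = poutine.replay(model, trace=trace)(x, y)
    loss = loss + K * torch.nn.functional.mse_loss(pred, y)
    loss.backward()
    optim.step()
```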
