Hi @fehiepsi ,
I am following this implementation for classification using a Bayesian neural network.
In a normal neural network we call
loss.backward() and optim.step()
for back-propagation. In the Bayesian implementation we have
svi = SVI(self.model, self.guide, optim, elbo)
where optim is a pyro.optim optimizer.
There is no explicit optim.step() or loss.backward() call. Is this being done internally by SVI, or is this implementation wrong?