Back-propagation in SVI

Hi @fehiepsi ,

I am following this implementation for classification using a Bayesian neural network.

In a normal neural network we call
loss.backward()
optim.step()
for back-propagation. In the Bayesian implementation we have

svi = SVI(self.model, self.guide, optim, elbo)

where optim is a pyro.optim optimizer. There is no explicit optim.step() or loss.backward() call. Is this being done internally, or is the implementation wrong?

backward() and step() are called internally by SVI.

Hi @eb8680_2, thank you for the response.