Hi
The examples in the Pyro docs usually run a given number of iterations. Is there any convenient convergence/stopping criterion code lurking in Pyro, e.g. tolerance of norm of gradient of the ELBO (or whatever Stan is using)?
Thanks in advance,
Depending on what you want to do, you can look at the gradient of the loss or use a learning rate scheduler, though I think you'd have to use the dev version of Pyro since it fixes a recently surfaced bug.
Thanks! But a look at the LR scheduler code in PyTorch suggests that they don't have any sort of stopping criterion. I'll take a look at the gradient of the loss.
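For what it's worth, a simple hand-rolled stopping rule on the ELBO loss itself is easy to bolt onto a training loop. Below is a minimal sketch (not anything shipped with Pyro): it smooths the per-step loss with an exponential moving average and stops once the relative change stays below a tolerance for a number of consecutive steps. The `EarlyStopper` name and all parameters are made up for illustration.

```python
# Sketch of a loss-plateau stopping criterion (not part of Pyro itself).
# Track an exponential moving average of the per-step ELBO loss and stop
# when its relative change stays below `tol` for `patience` steps.
class EarlyStopper:
    def __init__(self, tol=1e-4, patience=10, smoothing=0.9):
        self.tol = tol
        self.patience = patience
        self.smoothing = smoothing
        self.ema = None       # exponential moving average of the loss
        self.stale_steps = 0  # consecutive steps below tolerance

    def converged(self, loss):
        if self.ema is None:
            self.ema = loss
            return False
        prev = self.ema
        self.ema = self.smoothing * prev + (1 - self.smoothing) * loss
        rel_change = abs(self.ema - prev) / (abs(prev) + 1e-12)
        self.stale_steps = self.stale_steps + 1 if rel_change < self.tol else 0
        return self.stale_steps >= self.patience


# Hypothetical usage inside an SVI loop (svi.step() returns a float loss):
#
#     stopper = EarlyStopper()
#     for step in range(max_steps):
#         loss = svi.step(data)
#         if stopper.converged(loss):
#             break
```

Since `svi.step()` returns a plain float, this works against the released version; it just doesn't look at gradients at all.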
Did you mean the differentiable_loss method? It seems to be dev-only, and the loss and loss_and_gradients methods return floats.
> did you mean the differentiable_loss method
Yes, and you'll have to use the dev version until the next release.
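For the gradient-norm flavor of stopping (roughly what Stan's optimizer uses as its tol_grad check), the stopping logic itself is independent of Pyro. Here is a toy sketch on a quadratic loss with a hand-coded gradient; in Pyro you would instead backprop differentiable_loss and read the gradient norm off each parameter's .grad. All names here (`minimize`, `tol_grad`, the toy loss) are made up for illustration.

```python
import math

# Toy illustration of a gradient-norm stopping rule applied to a simple
# quadratic loss with a hand-coded analytic gradient. Only the stopping
# logic is the point; the optimizer is plain gradient descent.

def loss(x, y):
    return (x - 3.0) ** 2 + 2.0 * (y + 1.0) ** 2

def grad(x, y):
    return (2.0 * (x - 3.0), 4.0 * (y + 1.0))

def minimize(x, y, lr=0.1, tol_grad=1e-6, max_steps=10_000):
    for step in range(max_steps):
        gx, gy = grad(x, y)
        if math.hypot(gx, gy) < tol_grad:  # stop: gradient norm below tolerance
            return x, y, step
        x -= lr * gx
        y -= lr * gy
    return x, y, max_steps
```

The same pattern would wrap a Pyro training step: compute the differentiable loss, call backward, sum the squared norms of the parameter gradients, and break out of the loop once the total norm drops below tolerance.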