Finite-difference gradients for custom distributions

Suppose I have a model whose observed data are sampled from a custom distribution with a non-differentiable likelihood. Would it be feasible to compute the gradient via a finite-difference method and then run inference on that model with SVI or HMC? What would it take to implement?

This is possible in PyMC; see “Using a ‘black box’ likelihood function (numpy)” in the PyMC example gallery.
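
For a rough idea of what that notebook's approach looks like, here is a minimal sketch under stated assumptions: `black_box_loglik`, `fd_grad`, the toy data, and the step size `EPS` are all hypothetical illustrations, not the gallery's exact code. The log-likelihood is wrapped in a pytensor `Op` whose `grad()` defers to a second `Op` that computes central finite differences:

```python
import numpy as np
import pymc as pm
import pytensor.tensor as pt
from pytensor.graph import Apply, Op

data = np.random.default_rng(0).normal(2.0, 1.0, size=50)  # toy observed data
EPS = 1e-6  # finite-difference step size

def black_box_loglik(theta):
    # Hypothetical non-differentiable log-likelihood (pure numpy).
    return -np.sum(np.abs(data - theta[0]))

def fd_grad(theta):
    # Central finite differences of the log-likelihood w.r.t. theta.
    g = np.zeros_like(theta)
    for i in range(theta.size):
        e = np.zeros_like(theta)
        e[i] = EPS
        g[i] = (black_box_loglik(theta + e) - black_box_loglik(theta - e)) / (2 * EPS)
    return g

class LogLikeGrad(Op):
    # The numerical gradient as its own Op so it can appear in the graph.
    def make_node(self, theta):
        theta = pt.as_tensor_variable(theta)
        return Apply(self, [theta], [theta.type()])

    def perform(self, node, inputs, outputs):
        outputs[0][0] = fd_grad(inputs[0])

class LogLike(Op):
    # Black-box log-likelihood whose grad() defers to LogLikeGrad.
    def make_node(self, theta):
        theta = pt.as_tensor_variable(theta)
        return Apply(self, [theta], [pt.dscalar()])

    def perform(self, node, inputs, outputs):
        outputs[0][0] = np.asarray(black_box_loglik(inputs[0]))

    def grad(self, inputs, output_grads):
        (theta,) = inputs
        return [output_grads[0] * LogLikeGrad()(theta)]

with pm.Model():
    theta = pm.Normal("theta", 0.0, 5.0, shape=1)
    pm.Potential("loglik", LogLike()(theta))
    idata = pm.sample()  # NUTS runs off the finite-difference gradient
```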

If you're using Pyro/PyTorch, you could use something like

```python
pyro.factor("my_custom_likelihood", my_log_prob())
```

You would then need to implement my_log_prob as a custom PyTorch autograd function (torch.autograd.Function) with its own backward method, i.e. the gradient.
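
Here is a minimal sketch of that, assuming a hypothetical scalar log-likelihood `black_box_log_prob` (a toy piecewise-linear function standing in for your non-differentiable one) and central differences in the backward pass; the prior, shapes, and `EPS` are placeholders:

```python
import torch
import pyro
import pyro.distributions as dist
from pyro.infer import MCMC, NUTS

EPS = 1e-4  # finite-difference step size

def black_box_log_prob(theta):
    # Hypothetical stand-in for the non-differentiable log-likelihood.
    # It must return a scalar tensor; internally it may call numpy, a simulator, etc.
    return -(theta - 2.0).abs().sum()

class FiniteDiffLogProb(torch.autograd.Function):
    """Forward: evaluate the black-box log-prob. Backward: central differences."""

    @staticmethod
    def forward(ctx, theta):
        ctx.save_for_backward(theta)
        return black_box_log_prob(theta)

    @staticmethod
    def backward(ctx, grad_output):
        (theta,) = ctx.saved_tensors
        flat = theta.reshape(-1)
        grad = torch.zeros_like(flat)
        with torch.no_grad():
            # Perturb one coordinate at a time and take the central difference.
            for i in range(flat.numel()):
                e = torch.zeros_like(flat)
                e[i] = EPS
                grad[i] = (
                    black_box_log_prob((flat + e).reshape_as(theta))
                    - black_box_log_prob((flat - e).reshape_as(theta))
                ) / (2 * EPS)
        return grad_output * grad.reshape_as(theta)

def model():
    theta = pyro.sample("theta", dist.Normal(torch.zeros(2), 5.0).to_event(1))
    pyro.factor("my_custom_likelihood", FiniteDiffLogProb.apply(theta))

mcmc = MCMC(NUTS(model), num_samples=500, warmup_steps=500)
mcmc.run()
```

The same model works with SVI, since gradients flow through pyro.factor in the usual way. Keep in mind that NUTS is then running on an approximate gradient, so the step size EPS deserves some care.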
