I thought about using the Rejector class, since this distribution can be enveloped by a Normal(loc, sig) in a rejection sampler (let’s call the log_prob of the Normal log_g, to match Wikipedia’s rejection-sampling notation).
However, the docs aren’t clear on the following:

- log_prob_accept: is this the full acceptance probability, i.e. log_f(x) - log_g(x) - log(M), or just log_f(x)?
- log_scale: what is this? The docs say “Total log probability of acceptance”.
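For concreteness, here is a stdlib sketch of the rejection sampler I have in mind, in Wikipedia’s f/g/M notation (a toy half-normal target; the names here are mine, nothing in this snippet is Pyro API):

```python
import math
import random

# Toy setup purely to pin down the notation: target f is a half-normal,
# envelope g is Normal(0, 1), and M = 2 so that f(x) <= M * g(x) for all x.

def log_f(x):  # target density (half-normal on [0, inf))
    if x < 0:
        return -math.inf
    return math.log(2.0) - 0.5 * x * x - 0.5 * math.log(2.0 * math.pi)

def log_g(x):  # envelope density, Normal(0, 1)
    return -0.5 * x * x - 0.5 * math.log(2.0 * math.pi)

log_M = math.log(2.0)

def log_accept(x):
    # The "full" acceptance probability in Wikipedia's notation:
    # log( f(x) / (M * g(x)) ) = log_f(x) - log_g(x) - log(M).
    return log_f(x) - log_g(x) - log_M

def sample():
    while True:
        x = random.gauss(0.0, 1.0)
        if random.random() < math.exp(log_accept(x)):
            return x
```

In this notation, my question is whether log_prob_accept is log_accept above or just log_f, and whether log_scale would then be log(1/M), i.e. the overall acceptance rate of the sampler.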
If someone could point me in the right direction I’d be very grateful.
Hi @ludkinm, first I’d recommend against using Rejector if you can avoid it. I believe your distribution can be represented as something like
EDIT: the following ignores the change-of-variables factor d(x**2)/dx = 2x in the density…
If you really do want to use Rejector, I’d recommend grepping around the tests/ directory for usage examples, e.g. see how the Rejection* distributions are created for use in test_rejector.py.
@ludkinm sorry I think my TransformedDistribution is wrong. I’d still recommend trying to find an analytic solution if possible.
Note that Rejector is sometimes useful for distributions used in guides, where it enables a partial reparameterization trick yielding a partially differentiable sampler; however, Rejector is not useful for distributions used in models. If you only need the model part, you can simply define a log_prob method and (if you need to sample from the posterior predictive) a (non-differentiable) rejection sampler.
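That model-side pattern might look like the following stdlib-only sketch (a hypothetical Beta(2, 2) target with a Uniform(0, 1) envelope, not @ludkinm’s actual density): a log_prob for scoring, plus a plain non-differentiable rejection sampler for posterior-predictive draws.

```python
import math
import random

class Beta22:
    """Model-side distribution: a log_prob method for scoring, plus a
    non-differentiable rejection sampler for posterior-predictive draws."""

    # Density 6x(1-x) peaks at 1.5, so f(x) <= M * g(x) with g = Uniform(0, 1).
    LOG_M = math.log(1.5)

    def log_prob(self, x):
        if not 0.0 < x < 1.0:
            return -math.inf
        return math.log(6.0) + math.log(x) + math.log(1.0 - x)

    def sample(self):
        # Plain rejection: propose from Uniform(0, 1) (so log g = 0) and
        # accept with probability f(x) / (M * g(x)).
        while True:
            x = random.random()
            if random.random() < math.exp(self.log_prob(x) - self.LOG_M):
                return x
```

No gradients flow through sample(), which is fine for a model-side distribution since only log_prob is ever differentiated.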
Thanks for the pointers @fritzo. Yes I think it’s wrong too.
I only need a log_prob method, and I realised that for MCMC one can just supply the log potential function.
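For anyone else reading: a stdlib sketch of why only the potential is needed, using a toy random-walk Metropolis kernel (my own names; as far as I can tell Pyro’s HMC/NUTS kernels take a potential_fn argument for the same purpose):

```python
import math
import random

def metropolis(potential, x0, n_steps, step=1.0):
    """Random-walk Metropolis: the target enters only through its potential
    (negative log density); no sampler for the target itself is needed."""
    x = x0
    neg_u = -potential(x)
    samples = []
    for _ in range(n_steps):
        y = x + random.gauss(0.0, step)
        neg_u_y = -potential(y)
        # Accept with probability min(1, exp(-U(y) + U(x))).
        if random.random() < math.exp(min(0.0, neg_u_y - neg_u)):
            x, neg_u = y, neg_u_y
        samples.append(x)
    return samples
```

For example, metropolis(lambda x: 0.5 * x * x, 0.0, 10000) targets a standard Normal, using only its potential.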
Rejector is used to draw partially reparametrized samples. In ELBO-maximizing variational inference, samples are drawn from the guide and only scored against the model. Therefore fancy sampling machinery is only useful for distributions that appear in guides; distributions in models need only implement .log_prob().