MAP with Rician distribution

Hi,

I would like to do MAP estimation where the likelihood and the prior are both Rician distributions. Since the Rician distribution is not implemented in Pyro, I am not sure how to do this efficiently. I want to estimate x from y such that

y | x = sqrt(z1^2 + z2^2)
x = sqrt(z3^2 + z4^2)
where the z_i are independent Gaussians with z1 ~ N(x cos(t), r1), z2 ~ N(x sin(t), r1), z3 ~ N(v cos(t), r2), z4 ~ N(v sin(t), r2).

Note that v, r1, r2 are known here.
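
For concreteness, here is a rough sketch of how I simulate data from this model (treating r1 and r2 as the Gaussian standard deviations; the numbers are placeholders):

import math
import torch

# Placeholder values; in my problem v, r1, r2 are known and t is the phase
v, r1, r2, t = 2.0, 0.5, 1.0, 0.3

# x = sqrt(z3^2 + z4^2), with z3 ~ N(v cos t, r2), z4 ~ N(v sin t, r2)
z3 = v * math.cos(t) + r2 * torch.randn(())
z4 = v * math.sin(t) + r2 * torch.randn(())
x = (z3 ** 2 + z4 ** 2).sqrt()

# y | x = sqrt(z1^2 + z2^2), with z1 ~ N(x cos t, r1), z2 ~ N(x sin t, r1)
z1 = x * math.cos(t) + r1 * torch.randn(())
z2 = x * math.sin(t) + r1 * torch.randn(())
y = (z1 ** 2 + z2 ** 2).sqrt()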

Thanks!

Hi @keseiya, I think the easiest solution would be to implement a custom distribution that defines .log_prob() but not .sample(). Looking at the Wikipedia page, something like this should work:

import torch
from torch.distributions import constraints
from torch.distributions.utils import broadcast_all
from pyro.distributions import TorchDistribution

class Ricean(TorchDistribution):
    arg_constraints = {"loc": constraints.positive,
                       "scale": constraints.positive}
    support = constraints.positive

    def __init__(self, loc, scale, *, validate_args=None):
        self.loc, self.scale = broadcast_all(loc, scale)
        super().__init__(self.loc.shape, validate_args=validate_args)

    def log_prob(self, value):
        # log p(x) = log x - log s^2 - (x^2 + loc^2) / (2 s^2) + log I0(x * loc / s^2)
        s2 = self.scale.square()
        return (
            value.log() - s2.log()
            - 0.5 * (value.square() + self.loc.square()) / s2
            + torch.special.i0(value * self.loc / s2).log()
        )

To validate, you could either check against another distribution implementation (e.g. scipy) or implement a .sample() method (by sampling a 2D Gaussian random variable and computing its norm) and test via the goodness-of-fit helpers in pyro.distributions.testing.gof as in this test code.
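
For example, a minimal sketch of such a .sample() method for the Ricean class above (this treats scale as the per-component standard deviation and uses the norm-of-a-2D-Gaussian construction):

def sample(self, sample_shape=torch.Size()):
    # Norm of a 2D Gaussian centered at (loc, 0) with per-component std scale;
    # this is the same construction that defines a Rician random variable.
    shape = self._extended_shape(sample_shape)
    z1 = self.loc + self.scale * torch.randn(shape)
    z2 = self.scale * torch.randn(shape)
    return (z1.square() + z2.square()).sqrt()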

If you use this in variational inference with AutoGuides, I'd recommend initializing with init_to_feasible, which avoids the need to draw samples.
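
For reference, here is a rough sketch of how the MAP setup could look with the class above (the model structure and names are only my guess at your problem, treating r1 and r2 as the Rician scale parameters):

import torch
import pyro
from pyro.infer import SVI, Trace_ELBO
from pyro.infer.autoguide import AutoDelta, init_to_feasible

# Known constants and one observed y (placeholder numbers)
v, r1, r2 = torch.tensor(2.0), torch.tensor(0.5), torch.tensor(1.0)
y_obs = torch.tensor(2.3)

def model(y_obs):
    # Prior x ~ Rician(v, r2); likelihood y | x ~ Rician(x, r1)
    x = pyro.sample("x", Ricean(v, r2))
    pyro.sample("y", Ricean(x, r1), obs=y_obs)

# A point-mass (Delta) guide turns SVI into MAP estimation
guide = AutoDelta(model, init_loc_fn=init_to_feasible)
svi = SVI(model, guide, pyro.optim.Adam({"lr": 0.01}), Trace_ELBO())
for _ in range(1000):
    svi.step(y_obs)

x_map = guide.median()["x"]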

Thank you. This was very helpful.

What do you mean by “invariational inference”?

Typo, fixed to “in variational inference”, in contrast to “in MCMC inference”.


For future readers of this thread, try this version of the log_prob method instead. The difference is that it uses torch.special.i0e rather than torch.special.i0: i0 overflows to infinity unless its input is small, which then breaks the gradient calculation. Since i0e(x) = exp(-|x|) * i0(x), the extra exponential factor is absorbed into the quadratic term, which becomes -(value - loc)^2 / (2 * s2).

def log_prob(self, value):
    # i0e(x) = exp(-|x|) * i0(x); the exp(value * loc / s2) factor is folded
    # into the quadratic term, which becomes -(value - loc)^2 / (2 * s2).
    s2 = self.scale.square()
    return (
        value.log() - s2.log()
        - 0.5 * (value - self.loc).square() / s2
        + torch.special.i0e(value * self.loc / s2).log()
    )
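
As a quick sanity check (a sketch that assumes scipy is installed and the Ricean class above is in scope), either log_prob version can be compared against scipy.stats.rice, which uses the parameterization b = loc / scale:

import torch
from scipy.stats import rice

loc, scale, value = 2.0, 1.5, 2.7  # arbitrary test point

d = Ricean(torch.tensor(loc), torch.tensor(scale))
print(d.log_prob(torch.tensor(value)))                 # custom implementation
print(rice.logpdf(value, b=loc / scale, scale=scale))  # scipy reference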