Custom constraint on part of a parameter

Hello,

I have a parameter called x_value inside the guide; it is 4-dimensional and depends on the parameter prior.

Now I want to have this specific constraint:
rod = (prior[2] - prior[0]) ** 2 + (prior[3] - prior[1]) ** 2
where rod is in the interval 0.9 < rod < 1.1.

    x_value = pyro.param("x_value_{}".format(t), Variable(prior, requires_grad=True))

How would it be possible to implement this kind of constraint?

I tried introducing an additional parameter like this:

    rod = (prior[2] - prior[0]) ** 2 + (prior[3] - prior[1]) ** 2
    rod_param = pyro.param("rod_{}".format(t), Variable(rod, requires_grad=True),
                           constraint=dist.constraints.interval(2*rod_radius-d, 2*rod_radius+d))

    x_value = pyro.param("x_value_{}".format(t), Variable(prior, requires_grad=True))

But this does not seem to constrain x_value at all. I also tried introducing the rod parameter after x_value, but this has no effect. The two parameters do not seem to be linked, so I am not able to enforce this constraint.

Thanks in advance for your answer.

Let me see if I understand correctly: do you want to learn values of prior that satisfy the following inequality constraint?

0.9 < (prior[2] - prior[0]) ** 2 + (prior[3] - prior[1]) ** 2 < 1.1

If so, I’m not sure there’s an easy way to do this directly in Pyro. Let me first try to explain how Pyro’s constraint system works, and then suggest some possible solutions.

Pyro’s constraint system allows you to learn parameters obeying simple constraints by transforming them into unconstrained space and learning an unconstrained variable instead. E.g. if we want to learn a positive parameter x, we can instead learn an unconstrained parameter x_unconstrained and deterministically compute x = x_unconstrained.exp(). When you write pyro.param("x", torch.tensor(1.0), constraint=positive), Pyro will under the hood create a different parameter “x_unconstrained”, and at each learning step hand you not that parameter itself but a deterministic function of it.
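For concreteness, here’s a rough sketch of that equivalence (just the idea, not Pyro’s actual internals):

import torch
import pyro
from torch.distributions import constraints

# what you write:
x = pyro.param("x", torch.tensor(1.0), constraint=constraints.positive)

# roughly what Pyro does for you behind the scenes:
x_unconstrained = pyro.param("x_unconstrained", torch.tensor(0.0))  # optimized directly
x = x_unconstrained.exp()  # the positive value your model/guide actually sees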

A limitation of Pyro’s constraint system is that you can’t easily combine or conjoin constraints the way you would in a constrained optimization problem. However, while Pyro doesn’t allow you to combine hard constraints, it makes it very easy to combine Bayesian priors, which can be seen as softer versions of hard constraints. To combine priors, you can simply add pyro.sample statements to your program.

One possible solution to your problem would be to relax your rod constraint to a prior, e.g. here’s a Gaussian relaxation:

rod = (prior[2] - prior[0]) ** 2 + (prior[3] - prior[1]) ** 2
pyro.sample("rod_prior", dist.Norma(1.0, 0.1), obs=rod)

or similarly you could add a pair of factor statements, e.g. here’s a softplus relaxation:

import torch.nn.functional as F

rod = (prior[2] - prior[0]) ** 2 + (prior[3] - prior[1]) ** 2
temperature = 0.1  # relaxation hyperparameter
pyro.factor("rod_lb", -F.softplus((0.9 - rod) / temperature))  # penalize rod < 0.9
pyro.factor("rod_ub", -F.softplus((rod - 1.1) / temperature))  # penalize rod > 1.1

Another solution might be to do some algebra and try to express, say, prior[2] as a function of prior[0], and then prior[3] as a function of prior[0:2]. This would preserve the hard constraint in your example, but as your models become more complex this kind of trick won’t scale. Here’s some actual math for your example:

from torch.distributions import constraints

prior_0 = pyro.param("prior_0", torch.tensor(0.))  # unconstrained
prior_2_minus_0 = pyro.param(
    "prior_2_minus_0",
    torch.tensor(0.0),
    constraint=constraints.interval(-0.1**0.5, 0.1**0.5),
)
prior_2 = prior_0 + prior_2_minus_0
prior_1 = pyro.param("prior_1", torch.tensor(0.))  # unconstrained
prior_3_minus_1_scaled = pyro.param(
    "prior_3_minus_1_scaled",
    torch.tensor(0.5),  # initialize strictly inside (0, 1)
    constraint=constraints.unit_interval,
)
prior_3 = prior_1 + prior_3_minus_1_scaled * (1 - prior_2_minus_0.abs())
prior = pyro.deterministic(
    "prior", torch.stack([prior_0, prior_1, prior_2, prior_3], -1)
)

Yikes, it looked simpler to relax!
