Bayesian model with constraints

Hi there,

I’m trying to build a model with a constrained likelihood. Simplifying it a bit, it looks like this:

B ~ N(0, I)
Y_pred_train = X_train @ B
Y_pred_test = X_test @ B

if (Y_pred_test > Y_test).all():
    likelihood ~ N(Y_pred_train, I, observed=Y_train)
else:
    log_likelihood = -inf

Basically, I only want samples where (Y_pred_test > Y_test).all() holds; for those, a regular Gaussian likelihood is fine. I can code this up in emcee easily enough (roughly sketched below), but is there an idiomatic way to implement this in Pyro and sample it with NUTS?
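
For reference, the emcee version I have in mind is roughly this (just a sketch; the hard constraint returns -inf from the log-probability, and normalizing constants are dropped):

import numpy as np
import emcee

def log_prob(B, X_train, Y_train, X_test, Y_test):
    # hard constraint: reject any B where the test predictions don't all exceed Y_test
    if not (X_test @ B > Y_test).all():
        return -np.inf
    log_prior = -0.5 * B @ B                  # B ~ N(0, I), up to a constant
    resid = Y_train - X_train @ B
    return log_prior - 0.5 * resid @ resid    # N(Y_pred_train, I) likelihood, up to a constant

# this plugs straight into emcee, e.g.
# sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, args=(X_train, Y_train, X_test, Y_test))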

Well, one simple thing you could try is using a factor statement as a soft constraint to heavily penalize samples where the condition isn't satisfied.


That sounds very promising. Do you have any examples of factor being used in practice that I could look at?

Well, in the context of HMC the factor just adds a term to the log joint density, so e.g. you could do something like this:

import jax.numpy as jnp
import numpyro

def model(data):
    theta = numpyro.sample("theta", ...)
    # this is positive for theta < 0 and zero otherwise
    bad = jnp.fabs(theta) - theta
    # the factor is negative when the constraint is not satisfied
    # and gets more negative the further the constraint is from being satisfied
    numpyro.factor("soft_constraint", -jnp.square(bad))
    numpyro.sample("obs", ..., obs=data)

Note that this is a soft relaxation, so the inference won't be exact.
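
For your original model, that might look something like the following (a rough sketch; the penalty scale alpha and the unit noise scales are assumptions you'd want to adjust):

import jax
import jax.numpy as jnp
import numpyro
import numpyro.distributions as dist
from numpyro.infer import MCMC, NUTS

def model(X_train, Y_train, X_test, Y_test, alpha=100.0):
    # B ~ N(0, I)
    B = numpyro.sample("B", dist.Normal(0.0, 1.0).expand([X_train.shape[1]]).to_event(1))
    Y_pred_test = X_test @ B
    # positive wherever the constraint Y_pred_test > Y_test is violated, zero otherwise
    violation = jnp.maximum(Y_test - Y_pred_test, 0.0)
    # soft version of the hard constraint: quadratic penalty on the violations
    numpyro.factor("soft_constraint", -alpha * jnp.sum(jnp.square(violation)))
    # regular Gaussian likelihood on the training data
    numpyro.sample("obs", dist.Normal(X_train @ B, 1.0).to_event(1), obs=Y_train)

# then, with your data arrays in hand:
# mcmc = MCMC(NUTS(model), num_warmup=1000, num_samples=1000)
# mcmc.run(jax.random.PRNGKey(0), X_train, Y_train, X_test, Y_test)

Larger alpha makes the constraint closer to hard, but it also makes the posterior geometry harder for NUTS, so there's a tradeoff.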
