Penalizing Non-Latent Variables in the Model Function

Hi Pyro-Devs,

I was wondering what the recommended way is to penalize non-latent variables in the model() function.
Why? I would like to penalize a transformed version of the residuals from a Bernoulli likelihood. A Bernoulli sample statement does not explicitly model the residuals, but I can compute them myself.

My goal looks like this:

def model(self, X, y):
    ...

    mean = X * beta
    pyro.sample(
        "observations",
        dist.Bernoulli(probs=logits_to_probs(mean)),
        obs=y,
    )

    def f(residuals):
        ...

    trans_residuals = f(logits_to_probs(mean) - y)
    # `lambda` is a reserved keyword in Python, hence `lam`
    p = lam * torch.linalg.norm(trans_residuals, ord=2)

    # Increase ELBO by `p`
    ?

Now I'm wondering how best/correctly to replace the `?`, or whether there is another go-to approach?

I already tried recomputing the residuals in the guide: take the medians of beta, recompute the mean, and penalize the transformed residuals with the pyro.factor() primitive. This has the major drawback that I need to sample and recompute the mean from the model() function again, and on top of that it does not work well (no changes visible).
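For reference, the penalty computation itself is easy to reproduce outside of the model; here is a self-contained sketch in plain torch, where `f` (a tanh transform), `lam`, and the example tensors are placeholders standing in for my actual transform, weight, and data:

```python
import torch

def f(residuals):
    # Placeholder transform; my real f() is more involved
    return torch.tanh(residuals)

lam = 0.1  # placeholder penalty weight

# Stand-ins for logits_to_probs(mean) and the observed labels y
probs = torch.tensor([0.8, 0.3, 0.6])
y = torch.tensor([1.0, 0.0, 1.0])

# Transformed residuals and their L2-norm penalty
trans_residuals = f(probs - y)
p = lam * torch.linalg.norm(trans_residuals, ord=2)

# What I tried so far: adding this as a log-density term in the guide via
#     pyro.factor("penalty", -p)
# which is the step that did not behave as expected.
print(p)
```

So the open question is only where and how to attach `p` to the ELBO, not how to compute it.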

Thanks in advance!