Hi, I’m wondering: what would be the most straightforward / high-level / accessible way to implement custom objectives for SVI in numpyro (reference example in pyro)?

I’m trying to implement various experimental L1/L2 regularizers for an SVI-based GLM.

The best I can think of right now is intervening somewhere in:

```
svi_state, loss = svi.stable_update(svi_state, data)
params = svi.get_params(svi_state)
# then regularize the params in some way
# then update the params in the svi state manually
```
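For what it’s worth, a minimal pure-Python sketch of that param-surgery idea (the param names, learning rate, and multiplier here are made up for illustration; real numpyro params would be JAX arrays pulled out via `svi.get_params`):

```python
def l2_shrink(params, lam, lr):
    # Multiplicative weight decay: w <- w * (1 - lr * lam),
    # i.e. the gradient-descent step for an added (lam / 2) * ||w||^2 term.
    return {name: [w * (1.0 - lr * lam) for w in ws]
            for name, ws in params.items()}

# Hypothetical GLM params, not from the thread.
params = {"coefs": [2.0, -4.0], "intercept": [1.0]}
params = l2_shrink(params, lam=0.1, lr=0.5)  # coefs -> [1.9, -3.8]
```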

It depends what you’re doing, but you can add arbitrary terms to the objective via a `factor` statement.

I want to keep state between the updates (imagine a simple L2 penalty, but with a different multiplier on each iteration), which I think is challenging to do in a clean way.

Could you please give me an example, say, a basic L2 penalty (or at least something)?

This `factor` primitive is not particularly well documented (in either numpyro or pyro). The best I could find is Custom Loss Function Implementation - #4 by fritzo, and there the author uses it inside the guide, not in the objective per se.

The ELBO is basically of the form `log model_density - log guide_density`.

A `factor` statement in the model effectively adds the given (log) factor to the model log density.

A `factor` statement in the guide effectively adds the given (log) factor to the guide log density.

So if you add e.g. `pyro.factor('my_factor', torch.tensor(1.23))` to your `model`, the ELBO objective will be increased by `1.23`.

And if you add `pyro.factor('my_factor', torch.tensor(1.23))` to your `guide`, the ELBO objective will be decreased by `1.23`.

Hi martinjankowiak,

Is there a way to do this differently in numpyro, such as

```
elbo_loss_fn = pyro.infer.Trace_ELBO().differentiable_loss

def loss_fn(data):
    elbo_loss = elbo_loss_fn(model, guide, data)
    x_loc = pyro.param("x_loc")
    reg_loss = L2_regularizer(x_loc)
    return elbo_loss + reg_loss
```

or implementing a custom loss, as in https://pyro.ai/examples/custom_objectives.html#Example:-KL-Annealing?