# Custom likelihood function with pyro.factor

Hello community!

I am trying to solve a problem without an analytical derivative (a black-box problem) by applying finite differences in my likelihood function. From what I have read this should be done with `pyro.factor`, but I don't really understand the documentation for that primitive.

The aim of the problem is to estimate three parameters (E, ρ, η) from experimental data.
So the main goal is to obtain f(E, ρ, η) = experimental data. Since f(E, ρ, η) is computed in an external simulation program, I have no gradient information for it. That is why I am trying to implement a custom likelihood.

The model and guide for the problem are the following:

``````python
def model(x, y_obs):
    # Density definition
    rho_mean = pyro.param("rho_mean", torch.tensor(0.))
    rho_std = pyro.param("rho_std", torch.tensor(1.), constraint=constraints.positive)
    rho = pyro.sample("rho", dist.Normal(rho_mean, rho_std))
    # Damping loss factor definition
    eta_mean = pyro.param("eta_mean", torch.tensor(0.))
    eta_std = pyro.param("eta_std", torch.tensor(1.), constraint=constraints.positive)
    eta = pyro.sample("eta", dist.Normal(eta_mean, eta_std))
    # Young's modulus definition
    E_mean = pyro.param("E_mean", torch.tensor(0.))
    E_std = pyro.param("E_std", torch.tensor(1.), constraint=constraints.positive)
    E = pyro.sample("E", dist.Normal(E_mean, E_std))

    mobcalc = mobilityCalc.apply
    with pyro.plate("data", len(y_obs)):
        pyro.factor("regularizer", dist.Normal(mobcalc([E, rho, eta]), 0.01).log_prob(y_obs))

def guide(x, y_obs):
    # Density guide
    rho_mean_q = pyro.param("rho_mean_guide", torch.tensor(0.))
    rho_std_q = pyro.param("rho_std_guide", torch.tensor(0.001), constraint=constraints.positive)
    pyro.sample("rho", dist.Normal(rho_mean_q, rho_std_q))
    # Damping loss factor guide
    eta_mean_q = pyro.param("eta_mean_guide", torch.tensor(0.))
    eta_std_q = pyro.param("eta_std_guide", torch.tensor(0.001), constraint=constraints.positive)
    pyro.sample("eta", dist.Normal(eta_mean_q, eta_std_q))

    # Young's modulus guide
    E_mean_q = pyro.param("E_mean_guide", torch.tensor(0.))
    E_std_q = pyro.param("E_std_guide", torch.tensor(0.001), constraint=constraints.positive)
    pyro.sample("E", dist.Normal(E_mean_q, E_std_q))
``````

I have also defined the class for the log-likelihood, based on the PyTorch tutorial on defining autograd functions:

``````python
class mobilityCalc(torch.autograd.Function):
    """
    We can implement our own custom autograd Functions by subclassing
    torch.autograd.Function and implementing the forward and backward passes
    which operate on Tensors.
    """
    @staticmethod
    def forward(ctx, input):
        """
        In the forward pass we receive a Tensor containing the input and return
        a Tensor containing the output. ctx is a context object that can be used
        to stash information for backward computation. You can cache arbitrary
        objects for use in the backward pass using the ctx.save_for_backward method.
        """
        ctx.save_for_backward(input)
        E, rho, eta = input
        return simulation(E, rho, eta)

    @staticmethod
    def backward(ctx, grad_output):
        """
        In the backward pass we receive a Tensor containing the gradient of the loss
        with respect to the output, and we need to compute the gradient of the loss
        with respect to the input.
        """
        input, = ctx.saved_tensors
        E, rho, eta = input
        return dE, drho, deta  # dE, drho, deta are not yet defined
``````

The problem is that I have discovered that the backward method of mobilityCalc is never called. The posteriors are getting updated (so the gradient is not zero), but I don't know based on what. Do you know what could be wrong here?
I couldn't find an example of how to use the factor primitive, so I don't know if I am using it properly, and I am not familiar with defining custom Pyro objects.


1. If you want a fixed prior in the model you shouldn't use `pyro.param`s, since they will get updated, e.g.:
``````diff
-rho_mean = pyro.param("rho_mean", torch.tensor(0.))
-rho_std = pyro.param("rho_std", torch.tensor(1.), constraint=constraints.positive)
+rho_mean = torch.tensor(0.)  # fixed prior params
+rho_std = torch.tensor(1.)  # fixed prior params
 rho = pyro.sample("rho", dist.Normal(rho_mean, rho_std))
``````
2. I believe in your case you can just use the `pyro.sample` primitive with the `obs` kwarg instead of `pyro.factor`:
``````diff
-pyro.factor("regularizer", dist.Normal(mobcalc([E, rho, eta]), 0.01).log_prob(y_obs))
+pyro.sample("regularizer", dist.Normal(mobcalc([E, rho, eta]), 0.01), obs=y_obs)
``````

That will add a log-likelihood term to the overall cost objective.

can you give more details on `simulation(E, rho, eta)`? i'm assuming it's non-pytorch code that does something like this:

``````python
import math

def simulation(E, rho, eta):
    return math.log(E.item())
``````

Hi Martin,

It's a black-box function. Y = f(E, ρ, η) is computed externally and I just receive the array of numbers (Y) back.

That is why I am trying to implement something like this. There is a post discussing this (black-box) problem, but there is no solution other than creating a custom loss function. And here I am, trying to do it.

yes, that's what i thought. unfortunately your approach can't work as written. if it's a black box, why would you expect this to succeed?

``````python
torch.autograd.grad(simulation(E, rho, eta), E, grad_outputs=grad_output, create_graph=True)
``````

the whole point is that torch cannot follow what's going on inside your function. it has no idea; it just sees some float output, nothing more.
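to make this concrete, here is a tiny demonstration (with made-up toy values) that any value routed through `.item()`, as a call into an external simulator necessarily is, loses its autograd history:

```python
import torch

E = torch.tensor(2.0, requires_grad=True)

# plain torch ops are recorded in the autograd graph
y_traced = E * 3.0

# going through .item() (as any external simulation must) detaches the value:
# torch only sees a raw float coming back in
y_blackbox = torch.tensor(E.item() * 3.0)

print(y_traced.grad_fn is not None)  # True: torch can backprop through this
print(y_blackbox.grad_fn)            # None: no gradient path back to E
```

this is why your `backward` is the only place where gradient information can enter, and why torch cannot fill it in for you.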

as shown in that tutorial, you need to provide an actual numerical quantity, either via an explicit formula:

``````python
def backward(ctx, grad_output):
    """
    In the backward pass we receive a Tensor containing the gradient of the loss
    with respect to the output, and we need to compute the gradient of the loss
    with respect to the input.
    """
    input, = ctx.saved_tensors
    return grad_output * 1.5 * (5 * input ** 2 - 1)
``````

or you can compute finite differences of your function and use those as the gradient.

incidentally, defining custom gradients is more of a torch question and would probably be better suited to the pytorch forum.