Hello community!
I am trying to solve a black-box problem (no analytical derivatives available), and I want to apply finite differences to my likelihood function. I gathered that this should be done with pyro.factor, but I don't really understand the documentation of that primitive.
The aim is to estimate three parameters (E, ρ, η) from experimental data; in other words, I want to find (E, ρ, η) such that f(E, ρ, η) matches the measurements. Since f(E, ρ, η) is computed by an external simulation program, I have no gradient information for it, which is why I am trying to implement a custom likelihood.
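From what I understand of the docs, pyro.factor(name, log_factor) just adds an arbitrary term to the joint log-density of the model, so a Gaussian likelihood could be written either as an observed sample statement or as a factor statement. Here is a minimal sketch of that equivalence as I understand it (f_out, sigma and y_obs are placeholders; both functions would be called from inside a model):

import pyro
import pyro.distributions as dist

def likelihood_as_sample(f_out, sigma, y_obs):
    # Observed sample statement: contributes Normal(f_out, sigma).log_prob(y_obs)
    pyro.sample("obs", dist.Normal(f_out, sigma), obs=y_obs)

def likelihood_as_factor(f_out, sigma, y_obs):
    # Equivalent factor statement: add the same log-probability term by hand
    pyro.factor("obs_factor", dist.Normal(f_out, sigma).log_prob(y_obs).sum())

Please correct me if that reading of pyro.factor is wrong.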
The model and guide for the problem are the following:
import torch
import pyro
import pyro.distributions as dist
from torch.distributions import constraints

def model(x, y_obs):
    # Density definition
    rho_mean = pyro.param("rho_mean", torch.tensor(0.))
    rho_std = pyro.param("rho_std", torch.tensor(1.), constraint=constraints.positive)
    rho = pyro.sample("rho", dist.Normal(rho_mean, rho_std))
    # Damping loss factor definition
    eta_mean = pyro.param("eta_mean", torch.tensor(0.))
    eta_std = pyro.param("eta_std", torch.tensor(1.), constraint=constraints.positive)
    eta = pyro.sample("eta", dist.Normal(eta_mean, eta_std))
    # Young's modulus definition
    E_mean = pyro.param("E_mean", torch.tensor(0.))
    E_std = pyro.param("E_std", torch.tensor(1.), constraint=constraints.positive)
    E = pyro.sample("E", dist.Normal(E_mean, E_std))
    mobcalc = mobilityCalc.apply
    y_simul = simulation(E, rho, eta)
    with pyro.plate("data", len(y_obs)):
        pyro.factor("regularizer", dist.Normal(mobcalc([E, rho, eta]), 0.01).log_prob(y_obs))
def guide(x, y_obs):
    # Density guide
    rho_mean_q = pyro.param("rho_mean_guide", torch.tensor(0.))
    rho_std_q = pyro.param("rho_std_guide", torch.tensor(0.001), constraint=constraints.positive)
    pyro.sample("rho", dist.Normal(rho_mean_q, rho_std_q))
    # Damping loss factor guide
    eta_mean_q = pyro.param("eta_mean_guide", torch.tensor(0.))
    eta_std_q = pyro.param("eta_std_guide", torch.tensor(0.001), constraint=constraints.positive)
    pyro.sample("eta", dist.Normal(eta_mean_q, eta_std_q))
    # Young's modulus guide
    E_mean_q = pyro.param("E_mean_guide", torch.tensor(0.))
    E_std_q = pyro.param("E_std_guide", torch.tensor(0.001), constraint=constraints.positive)
    pyro.sample("E", dist.Normal(E_mean_q, E_std_q))
I have also defined the class for the likelihood calculation based on this PyTorch tutorial: Defining New autograd Functions:
class mobilityCalc(torch.autograd.Function):
    """
    We can implement our own custom autograd Functions by subclassing
    torch.autograd.Function and implementing the forward and backward passes
    which operate on Tensors.
    """

    @staticmethod
    def forward(ctx, input):
        """
        In the forward pass we receive a Tensor containing the input and return
        a Tensor containing the output. ctx is a context object that can be used
        to stash information for backward computation. You can cache arbitrary
        objects for use in the backward pass using the ctx.save_for_backward method.
        """
        ctx.save_for_backward(input)
        E, rho, eta = input
        return simulation(E, rho, eta)

    @staticmethod
    def backward(ctx, grad_output):
        """
        In the backward pass we receive a Tensor containing the gradient of the loss
        with respect to the output, and we need to compute the gradient of the loss
        with respect to the input.
        """
        input, = ctx.saved_tensors
        E, rho, eta = input
        dE = torch.autograd.grad(simulation(E, rho, eta), E, grad_outputs=grad_output, create_graph=True)[0]
        drho = torch.autograd.grad(simulation(E, rho, eta), rho, grad_outputs=grad_output, create_graph=True)[0]
        deta = torch.autograd.grad(simulation(E, rho, eta), eta, grad_outputs=grad_output, create_graph=True)[0]
        return dE, drho, deta
The problem is that I have discovered that the backward method of mobilityCalc is never called. The posteriors are still getting updated (so the gradient is not zero), but I don't know what that update is based on. Do you know what could be wrong here? I couldn't find an example of how to use the factor primitive, so I don't know whether I am using it properly, and I am not familiar with defining custom Pyro objects.
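To be clear about the end goal: since simulation is a black box, what I eventually want backward to do is a finite-difference approximation instead of calling torch.autograd.grad. A sketch of what I have in mind (central differences; eps is a placeholder step size):

class mobilityCalcFD(torch.autograd.Function):
    """Black-box forward pass with finite-difference gradients (sketch)."""

    @staticmethod
    def forward(ctx, params):
        # params is a single 1-D tensor holding (E, rho, eta)
        ctx.save_for_backward(params)
        E, rho, eta = params
        return simulation(E, rho, eta)

    @staticmethod
    def backward(ctx, grad_output):
        params, = ctx.saved_tensors
        eps = 1e-4  # placeholder step size
        grads = []
        for i in range(len(params)):
            lo = params.clone()
            hi = params.clone()
            lo[i] -= eps
            hi[i] += eps
            # central difference: d(output) / d(params[i])
            d = (simulation(*hi) - simulation(*lo)) / (2 * eps)
            # chain rule: contract the incoming gradient with the sensitivity
            grads.append((grad_output * d).sum())
        return torch.stack(grads)

If that is the right direction, I guess the call site would also have to change to mobcalc(torch.stack([E, rho, eta])) so that the Function receives one tensor instead of a Python list.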
Thanks a lot in advance!!
T