Bayesian Curve Fitting for dummies

Hello everyone,

I finally had time to dive into Pyro, but I am still a complete beginner (I have a little experience with PyMC). I am trying to do curve fitting with Pyro to estimate parameters and their confidence intervals, starting from priors.
I have a continuous function whose parameters I want to estimate. The original code builds deterministic compartmental epidemiological models, but I simplified it for the sake of the example.

Let’s imagine I have a continuous Gaussian function depending on parameters (but really it could be any function). The function I want to estimate has parameters a = 1000, mu = 50, sigma = 10:

import numpy as np, torch, pyro
import pyro.distributions as dist
import matplotlib.pyplot as plt
from pyro.infer import SVI, Trace_ELBO
from pyro.optim import Adam
from tqdm import tqdm_notebook

def gaussian_fn(X, a, mu, sigma):
    return a * np.exp(-0.5 * ((X - mu) / sigma) ** 2)

x = np.linspace(0, 100)  # 50 evenly spaced points by default
data = torch.tensor(gaussian_fn(x, 1000, 50, 10) / 1000, dtype=torch.float)  # scaled to a max of 1, matching the model's prediction

Here is the very simple code I wrote to try to estimate mu = 50 using SVI, starting from a prior centered at 40:

mu_prior = 40
mu_true = 50

def model(data):
    # prior over mu, centered at our (deliberately wrong) initial guess
    mu = pyro.sample("mu", dist.Normal(float(mu_prior), 10.0))
    # evaluate the deterministic curve at the sampled mu (in numpy, outside
    # autograd), normalized to a maximum of 1
    pred = torch.tensor(gaussian_fn(x, 1000, float(mu), 10) / 1000, dtype=torch.float)
    sigma = torch.tensor(0.3)  # fixed observation noise
    with pyro.plate("data_plate", len(data)):
        pyro.sample("obs", dist.Normal(pred, sigma), obs=data)


def guide(data):
    # variational posterior over mu: a Normal with learnable location and scale
    mu_mu = pyro.param("mu_mu", torch.tensor(float(mu_prior)))
    mu_std = pyro.param("mu_std", torch.tensor(1.0), constraint=dist.constraints.positive)
    pyro.sample("mu", dist.Normal(mu_mu, mu_std))


# clear the param store in case we're in a REPL
pyro.clear_param_store()

adam_params = {"lr": 0.01}
optimizer = Adam(adam_params)
svi = SVI(model, guide, optimizer, loss=Trace_ELBO())


losses = []
params = []
params_std = []
n_steps = 1000
for step in tqdm_notebook(range(n_steps)):
    losses.append(svi.step(data))
    params.append(float(pyro.param("mu_mu")))
    params_std.append(float(pyro.param("mu_std")))




# Visualization
myfig, (ax1, ax2, ax3) = plt.subplots(nrows=1, ncols=3, figsize=(15, 4))
ax1.plot(losses)
ax1.set_title("ELBO")
ax1.set_xlabel("step")
ax1.set_ylabel("loss")

ax2.plot(params)
ax2.set_title("mu_mu")
ax2.axhline(mu_true, c="red")
ax2.set_xlabel("step")
ax2.set_ylabel("value")

ax3.plot(params_std)
ax3.set_title("mu_std")
ax3.set_xlabel("step")
ax3.set_ylabel("value")

plt.show()

With this code, the ELBO loss does not converge at all, despite my trying many learning rates and priors, and I can't figure out why:

  • Here I normalize the output of the function to a maximum of 1 before the observation, but I'm not sure it is the right way to go. I've seen snippets normalizing to zero mean and unit standard deviation, and others observing the data through a Normal distribution whose sigma is exponentially distributed (here I use a constant). What would be the right approach? (First sketch after this list.)
  • Is there a way not to observe the data directly, but instead to compute a loss between the two time series? For example an RMSE, or a more exotic loss like DTW, or a custom function. (Second sketch below.)
  • With the model described above I also tried MCMC, but it is very slow (around 10 minutes for 1000 samples + 300 warmup steps), whereas the function itself computes the time series in a few milliseconds; tell me if I'm missing something. (Third sketch below.)
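
For the first point, here is a minimal sketch of what I mean by an exponentially distributed sigma; the Exponential(1.0) rate is an arbitrary choice on my part, and the model is otherwise the same as above:

def model_learned_sigma(data):
    mu = pyro.sample("mu", dist.Normal(float(mu_prior), 10.0))
    # observation noise with an Exponential prior instead of a constant
    sigma = pyro.sample("sigma", dist.Exponential(1.0))
    pred = torch.tensor(gaussian_fn(x, 1000, float(mu), 10) / 1000, dtype=torch.float)
    with pyro.plate("data_plate", len(data)):
        pyro.sample("obs", dist.Normal(pred, sigma), obs=data)

I suppose the hand-written guide would then also need a (positively constrained) site for "sigma", or I could let an autoguide such as pyro.infer.autoguide.AutoNormal(model_learned_sigma) handle it automatically.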
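
For the second point, the closest mechanism I have found is pyro.factor, which adds an arbitrary term to the model's log-density. This sketch scores the fit with a negative RMSE, where the 50.0 weight is a made-up scaling of mine (and I realize the result is then no longer a proper likelihood):

def model_custom_loss(data):
    mu = pyro.sample("mu", dist.Normal(float(mu_prior), 10.0))
    pred = torch.tensor(gaussian_fn(x, 1000, float(mu), 10) / 1000, dtype=torch.float)
    rmse = torch.sqrt(torch.mean((pred - data) ** 2))
    # treat the negative (scaled) RMSE as an unnormalized log-likelihood
    pyro.factor("series_loss", -50.0 * rmse)

Something like DTW or a custom function would presumably slot in the same way, as long as it returns a scalar tensor.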
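
And for the third point, this is roughly the MCMC setup I used, in case the slowness comes from something obvious in it:

from pyro.infer import MCMC, NUTS

nuts_kernel = NUTS(model)
mcmc = MCMC(nuts_kernel, num_samples=1000, warmup_steps=300)
mcmc.run(data)
posterior_mu = mcmc.get_samples()["mu"]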

Thanks a lot for helping a beginner! And thanks for this beautiful library!
Theo
