# Model inversion and sensitivity analysis

Hi,

Assume I have a regression model with a couple of parameters that I estimate from observed data: obs = f(data; parameters), where f is a deterministic function.
I am successfully using Pyro to estimate the posterior distribution of the parameters, which allows me to draw samples and get a posterior distribution for the observations, given input data values.

Now I would like to invert the model, that is, to draw samples of data given observations. Of course, f is complex and I don’t know how to implement its inverse.

Also, I would like to do some sensitivity analysis, that is, get a posterior distribution for the observations as a function of one of the parameters, averaging over all values of the other parameters and the input data.

For example, if there are 2 parameters a and b, obs = f(data, a, b). If I had the joint distribution P(obs, data, a, b), model inversion would be computing P(data|obs) = \int P(data|obs, a, b) dP(a) dP(b) (marginalizing over the parameters), and sensitivity analysis for parameter a would be obtaining P(obs, a) by marginalizing over data and b.

Is there any documentation or tutorial on how to do this kind of analysis?

Thanks.

Hi @garjola,
Do I understand correctly that in your setup, the fundamental difference between your data and parameters is that your data are batched one per observation, whereas the parameters are shared among observations? If so, you could look into amortized variational inference: basically, training a function to stochastically invert your f(-,-). There you would train a guide that also has parameters shared among all observations.

Hi @fritzo,

Yes, I have observations (x_i, y_i) which are generated by a deterministic function y = f(x) and this function has unknown parameters theta. The pairs of values (x_i, y_i) are independent and theta are the same for all the observations.

I think that I understand your suggestion of training a function to invert f. I have done this kind of thing using neural networks, but I was thinking that with a PPL, if I am able to sample P(x, y, theta), I should be able to sample P(x | y, theta) and therefore have the full posterior of the inputs of the system given its outputs and its parameters.

To be honest, I don’t understand VI well (I am just using MCMC). I have followed the various tutorials on the subject, but I am having a lot of trouble with the concepts used in Pyro (guide, poutine, etc.). Maybe you can suggest some resources for that?

In the past, I have used Osvaldo Martin’s book, which is PyMC3-based, but I have moved to Pyro because I am using PyTorch for other parts of my work and would like to simplify the dependencies in my software stack. There are a lot of very good introductory resources for PyMC3 and Stan, but I feel that Pyro lacks this kind of material, which lets you learn the Bayesian approach at the same time as the PPL.

I understand that producing this kind of material is a huge undertaking, so please don’t take these remarks as criticism. I am very grateful to the Pyro team for all their work, but I feel that Pyro is very difficult to grasp for a beginner.

I have found “ports” of Statistical Rethinking in (Num)Pyro, but a similar effort for Osvaldo Martin’s book (let’s say, programmer-oriented) would be a great help. I started doing it myself, but I don’t have enough knowledge of Pyro to go beyond the second chapter!

Anyway, I am going way off topic here!

Thanks again.

> I was thinking that with a PPL, if I am able to sample P(x, y, theta), I should be able to sample P(x| y, theta)

Well, even with PPLs it is a little more involved. It is true that you can use the same model code for both tasks, but you’ll need to perform different inference, e.g. training two different guides: one for P(x,y,theta) and another conditional guide for P(x|y,theta). Sometimes we can factor things so that a single guide splits into a few parts, e.g. P(x,y,theta) = P(y,theta) P(x|y,theta); then you can use those parts in different combinations. You might look at Pyro’s semisupervised VAE example.

Re: learning material,
you might take a look at Evan Zamir’s tutorials or the other tutorials posted at https://github.com/pyro-ppl/pyro/issues/1461. I believe @eb8680_2 plans to collect pointers to third-party Pyro material somewhere on https://pyro.ai.