More detail on converting MAP to an SVI problem

Hello, I am trying to implement Sequential Line Search, but I am not sure of the right way to do it.
Simply put, the model approximates the user's preference g over a parameter space (think of photo editing with all its parameters), given the user's interactions with a slider, using the [Bradley–Terry model] (Bradley–Terry model - Wikipedia): the user picks a point between the two ends of the slider, and the probability of the user choosing a given point is formulated as g_chosen/(g_start + g_end). In the paper, the user's preference for each choice is treated as a latent variable, and MAP estimation is used to maximize the posterior. I am aware that Pyro is capable of doing MAP, but the only information I can find is an open issue and a closed PR.
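For concreteness, here is a tiny sketch of the choice likelihood I mean (the function and parameter names are my own, not the paper's; the paper exponentiates the goodness values with a scale s, which turns the ratio into a softmax):

```python
import torch

def btl_choice_prob(g_chosen, g_start, g_end, s=1.0):
    # Probability that the user prefers the chosen slider point over the
    # two slider ends, Bradley-Terry style: exponentiate each goodness
    # value (scaled by s) and normalize, i.e. a softmax over the three
    # candidates; we return the entry for the chosen point.
    logits = torch.stack([g_chosen, g_start, g_end]) / s
    return torch.softmax(logits, dim=0)[0]

# Equal goodness everywhere means the user is indifferent: probability 1/3.
p = btl_choice_prob(torch.tensor(1.0), torch.tensor(1.0), torch.tensor(1.0))
```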
In particular, I don’t understand

  1. Where should I compute the posterior distribution? Going through the tutorials, the posterior seems to be defined in the guide, but I am not sure how to feed it to the SVI object, since the model and guide in many tutorials never return anything.
  2. For MAP estimation, the GitHub issue says we should use a Delta distribution as the posterior, but I am not sure of the reason behind this. In my understanding, MAP tries to maximize the posterior, meaning we can evaluate the (unnormalized) posterior in closed form anyway, so why do we need to set it to a Delta distribution?
  3. The problem can (I think) be formulated as an SVI problem, but I don't know how to systematically convert a MAP problem into an SVI one.

Any help will be appreciated.

Below is my attempt to build the model.

self.kernel = RBF(self.input_dim,
                  variance=torch.tensor([1.0]),
                  lengthscale=torch.tensor([0.5]))
self.kernel.set_prior('variance',
                      dist.LogNormal(torch.tensor(0.5).log(), 0.1))
self.kernel.set_prior('lengthscale',
                      dist.LogNormal(torch.tensor(0.05).log(), 0.1))
self.y = Parameter(torch.randn(self.num_obs))
noise = Parameter(torch.tensor([1.0]))

def model(self, data):
    Kff = self.kernel(data) + noise.expand(data.shape[1]).diag()
    # log-likelihood of the slider choices, one BTL term per triple
    log_p_D_g = torch.stack(
        [log_BTL_likelihood(self.y[i * 3:i * 3 + 3], s=1)
         for i in range(len(self.y) // 3)]).sum()
    # log-density of the latent preference values under the GP prior
    log_p_g_theta = dist.MultivariateNormal(
        torch.zeros(self.num_obs), Kff).log_prob(self.y)
    # log-prior of the kernel hyperparameters
    log_p_theta = (
        dist.LogNormal(torch.tensor(0.5).log(), 0.1)
            .log_prob(self.kernel.get_param('variance'))
        + dist.LogNormal(torch.tensor(0.05).log(), 0.1)
            .log_prob(self.kernel.get_param('lengthscale')))
    # the unnormalized log-posterior that MAP should maximize; note that
    # nothing here is registered with Pyro, which is exactly my problem
    log_posterior = log_p_g_theta + log_p_D_g + log_p_theta
    return log_posterior

def guide(self):
    # ??? -- this is where I am stuck
    return

Hi @tobyclh,

Edward has a nice explanation of MAP. If we write down the ELBO with a Delta guide q(z) = Delta(z_map), the entropy term vanishes and the ELBO reduces to the log joint density log p(data, z_map), so maximizing the ELBO over z_map is exactly MAP. That is why the delta guide works.

For GP models, they derive from the Reparameterize class (which I made for automatic MAP guide generation). To see how to use it, take a look at the current implementation of the GPR model: pyro/gpr.py at dev · pyro-ppl/pyro · GitHub.

In your example, if you want to do MAP inference for your kernel, simply use self.kernel.set_mode("model") in your model and self.kernel.set_mode("guide") in your guide.

@fehiepsi Thank you for the response :slight_smile:

I looked up the GPR code; it is very helpful (and I am finally able to appreciate the usefulness of Pyro).
For a class that subclasses Reparameterize, we only need to create a guide function that

  1. calls pyro.sample with a Delta distribution,
  2. returns all the parameters. Is that correct?

If that’s the case, the only remaining question is that I need to do MAP inference for both my kernel and the preference vector. However, I am not sure how to incorporate such abstract data, since we never have a single ‘observation’ apart from the slider manipulations (each of which involves 3 points and enters the likelihood only through the BTL model).

@tobyclh If you use MAP for a subclass of Reparameterize, then in your guide just call self.set_mode("guide"). It will automatically create a MAP guide for its parameters (and its submodules’ parameters). Use self.get_param(param_name) to access these parameters (e.g., in your example, call y = self.get_param("y") and noise = self.get_param("noise")).

@fehiepsi Thank you!
I think I get it now.
I am preparing a very simple tutorial on MAP with Pyro (for my own understanding and for others who want to know).
If you don’t mind taking a look, I will post it when I get home; maybe we can clean it up and make a PR.


Hi, I’m trying to understand how to do MAP with Pyro (not via SVI with a Delta guide) as well. Would you mind sharing your tutorial?