Track the evolution of variational parameters (GPLVM)

I want to track the losses as well as the variational parameters in a Bayesian GPLVM.

For instance, in the GPLVM tutorial the line that does the training returns just the losses… but I am sure there is an easy way to also return the whole trajectory of the guide's variational parameters at each training step?

losses = gp.util.train(gplvm, num_steps=4000)

In this case that would be gplvm.X_loc and gplvm.X_scale for every step.

@vr308 The gp.util.train helper is just a few lines of PyTorch-friendly code, so you can run the loop yourself and inspect those parameters at each step.
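Here is a minimal sketch of such a loop, assuming the same defaults gp.util.train uses (Adam with lr=0.01 and TraceMeanField_ELBO); gplvm and num_steps stand in for your own model and step count:

import torch
from pyro.infer import TraceMeanField_ELBO

optimizer = torch.optim.Adam(gplvm.parameters(), lr=0.01)
loss_fn = TraceMeanField_ELBO().differentiable_loss

losses = []
for step in range(num_steps):
    optimizer.zero_grad()
    loss = loss_fn(gplvm.model, gplvm.guide)
    loss.backward()
    optimizer.step()
    losses.append(loss.item())
    # record whatever extra state you need here, e.g. snapshots of gplvm.X_loc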

I tried adding the lines pyro.param('X_loc') and pyro.param('X_scale') like below, but I think I am missing something… sorry, I am new to Pyro.

    # (body of pyro.contrib.gp.util.train, with parameter recording added)
    optimizer = (torch.optim.Adam(gpmodule.parameters(), lr=0.01)
                 if optimizer is None else optimizer)
    # TODO: add support for JIT loss
    loss_fn = TraceMeanField_ELBO().differentiable_loss if loss_fn is None else loss_fn

    def closure():
        optimizer.zero_grad()
        loss = loss_fn(gpmodule.model, gpmodule.guide)
        torch_backward(loss, retain_graph)
        return loss

    losses = []
    means = []
    scales = []
    for i in range(num_steps):
        loss = optimizer.step(closure)
        losses.append(torch_item(loss))
        # record the guide's variational parameters at this step
        means.append(pyro.param('X_loc'))
        scales.append(pyro.param('X_scale'))
    return losses, means, scales

I think something like that should work (you can check the parameter names with list(pyro.get_param_store().keys())). You will probably need to clone the recorded values, i.e. append x.detach().clone() instead of x.
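For example, something like this (snapshot_param is just a hypothetical helper name; the key names depend on your model and Pyro version, so list them first):

import pyro

def snapshot_param(name):
    # return a detached copy so later optimizer steps do not overwrite the record
    return pyro.param(name).detach().clone()

# the exact key names depend on the model and Pyro version, so inspect them first
print(list(pyro.get_param_store().keys()))

Then inside the loop you would append snapshot_param('X_loc') instead of the live tensor.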

Thanks. I've been trying different things, but it seems like I am capturing only the final state rather than the intermediate ones. Any suggestions?

import torch

from pyro.infer import TraceMeanField_ELBO
from pyro.infer.util import torch_backward, torch_item


def train(gpmodule, optimizer=None, loss_fn=None, retain_graph=None, num_steps=1000):
    """
    A helper to optimize parameters for a GP module.
    :param ~pyro.contrib.gp.models.GPModel gpmodule: A GP module.
    :param ~torch.optim.Optimizer optimizer: A PyTorch optimizer instance.
        By default, we use Adam with ``lr=0.01``.
    :param callable loss_fn: A loss function which takes ``gpmodule.model``
        and ``gpmodule.guide`` as inputs and returns the ELBO loss.
        By default, ``loss_fn=TraceMeanField_ELBO().differentiable_loss``.
    :param bool retain_graph: An optional flag of ``torch.autograd.backward``.
    :param int num_steps: Number of steps to run SVI.
    :returns: a list of losses recorded during training, together with lists of
        the ``X_loc`` and ``X_scale`` snapshots
    :rtype: tuple
    """
    optimizer = (torch.optim.Adam(gpmodule.parameters(), lr=0.01)
                 if optimizer is None else optimizer)
    # TODO: add support for JIT loss
    loss_fn = TraceMeanField_ELBO().differentiable_loss if loss_fn is None else loss_fn

    def closure():
        optimizer.zero_grad()
        loss = loss_fn(gpmodule.model, gpmodule.guide)
        torch_backward(loss, retain_graph)
        return loss

    losses = []
    means = []
    scales = []
    for i in range(num_steps):
        loss = optimizer.step(closure)
        losses.append(torch_item(loss))
        if i % 500 == 0:
            gpmodule.mode = 'guide'
            mean = gpmodule.X_loc.detach().clone()
            scale = gpmodule.X_scale.detach().clone()
            means.append(mean)
            scales.append(scale)
    return losses, means, scales

Hmm, your code looks correct to me. I am not sure what’s wrong here. :frowning: