GPLVM Reconstructing data


I just started using Pyro and I am following your tutorial on GPLVM. I have a question about reconstructing data from the latent space. Say I trained a model where the data space is M-dimensional and the latent space is 2-D. Is it possible in the current version of Pyro's GPLVM to map a sample from the latent space back to the original space?


I guess I can use the base model for reconstruction; for instance, I used the SparseGPRegression model:

import torch
import pyro.contrib.gp as gp

sGPR = gp.models.SparseGPRegression(X, y, kernel, Xu, noise=torch.tensor(0.01), jitter=1e-5)

gplvm = gp.models.GPLVM(sGPR)
losses = gp.util.train(gplvm, num_steps=4000)

To reconstruct a sample from the latent space, can I do the following?

gplvm.mode = "guide"
loc1, var1 = sGPR.forward(torch.Tensor([[0.1, 0.1]]), full_cov=True, noiseless=True)
loc2, var2 = sGPR.forward(torch.Tensor([[0.1, 0.1]]), full_cov=True, noiseless=True)

But each time I run the last part, the values in loc1 and loc2 are different. Does this mean the model continues to train?

Hi @sinanmut, that's a great question! Each time we run the forward pass, the parameter X is drawn from the guide distribution, so we get a new X. Because SparseGPRegression uses X in its forward method, you will get a new loc and var each time. You can fix X to a single sample (obtained from the guide) by deleting the prior of X:

gplvm.mode = "guide"
del gplvm._priors['X']

However, I don't think it is a good idea to make predictions with just one sample of X from its posterior. It is better to set X to its posterior mean (the value of gplvm.X_loc), or to use the VariationalSparseGP class (which does not require X for prediction).
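To see why fixing X makes the reconstruction deterministic, here is a dependency-free sketch of the GP posterior mean, mu* = k*^T (K + noise·I)^{-1} y, which is the quantity a forward pass computes for a test point. This is a plain-Python stand-in, not Pyro's implementation; all data, latent coordinates, and numbers below are made up for illustration. Once the latent X is held fixed (e.g. at gplvm.X_loc), repeated predictions agree exactly, while resampling X changes the result:

```python
import math
import random

def rbf(a, b, lengthscale=1.0):
    # Squared-exponential kernel between two latent points (lists of floats)
    d2 = sum((x - y) ** 2 for x, y in zip(a, b))
    return math.exp(-0.5 * d2 / lengthscale ** 2)

def solve(A, b):
    # Gaussian elimination with partial pivoting (small systems only)
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gp_posterior_mean(X, y, x_star, noise=0.01):
    # mu* = k*^T (K + noise I)^{-1} y -- the standard GP regression mean
    n = len(X)
    K = [[rbf(X[i], X[j]) + (noise if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    alpha = solve(K, y)                       # (K + noise I)^{-1} y
    k_star = [rbf(x_star, X[i]) for i in range(n)]
    return sum(ks * a for ks, a in zip(k_star, alpha))

# Latent coordinates held fixed (analogous to using gplvm.X_loc):
X_fixed = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
y = [0.5, 1.5, -0.5]
m1 = gp_posterior_mean(X_fixed, y, [0.1, 0.1])
m2 = gp_posterior_mean(X_fixed, y, [0.1, 0.1])
print(m1 == m2)  # True: deterministic once X is fixed

# Resampling the latent X on each call (as the guide does) changes the prediction:
random.seed(0)
X_sampled = [[x + random.gauss(0, 0.1) for x in row] for row in X_fixed]
m3 = gp_posterior_mean(X_sampled, y, [0.1, 0.1])
print(m1 == m3)  # False: a new X gives a new loc
```

This mirrors what you observed: loc1 and loc2 differ not because training continues, but because a fresh X is drawn inside each forward pass.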