Hi !

I am something of a newcomer to Gaussian processes, so most likely my problem is trivial.

Assume that I have 100 entries of 8-dimensional data in Y.

I now use a GPLVM to get a latent-space representation. This is X, with 100 entries of dimension 2.

So a reduction from dimensionality 8 to 2.

Inside the magic of the GPLVM, as I understand it, 8 Gaussian processes with 2-dimensional inputs have been fitted.
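To make that concrete, here is a minimal numpy sketch of that picture (this is not Pyro's actual implementation; the RBF kernel, lengthscale, and jitter values are my assumptions) with 8 independent GPs sharing the same 2-dimensional latent inputs:

```python
import numpy as np

rng = np.random.default_rng(0)
N, D, Q = 100, 8, 2          # data points, observed dim, latent dim

Y = rng.normal(size=(N, D))  # observed data
X = rng.normal(size=(N, Q))  # stand-in for the learned latents

def rbf(A, B, lengthscale=1.0):
    # squared-exponential kernel on the latent space
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

# One GP per output dimension, all conditioned on the same latents X:
K = rbf(X, X) + 1e-4 * np.eye(N)   # kernel matrix plus noise jitter
alpha = np.linalg.solve(K, Y)      # (N, D): one weight column per output GP
Y_rec = rbf(X, X) @ alpha          # posterior mean at the training latents

print(Y_rec.shape)  # (100, 8)
```

Note that the posterior mean needs `alpha` (or equivalently Y and K), which is the storage point raised below.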

What I don't understand is the reconstruction process, or rather whether GPLVMs are relevant for compression.

To reconstruct Y from X, I need to do something like “gplvm.forward(gplvm.X_loc)”, i.e. a forward call on my trained GPs.

But does that gplvm object not contain means and covariances that require the same space as the Y I am trying to compress?

I understand that I am getting a latent-space representation with X, but I don't see whether there is any compression to be gained.
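A back-of-envelope parameter count may clarify my question (the inducing-point count M = 10 is an arbitrary assumption, just for illustration):

```python
# Storage accounting for the toy problem: N points, D observed dims, Q latent dims
N, D, Q = 100, 8, 2

original = N * D                 # Y itself: 800 numbers
latents  = N * Q                 # X: 200 numbers

# To evaluate an exact GP posterior mean you also need the training targets
# (or the precomputed weights alpha = K^-1 Y, which has the same size as Y):
exact_gp_extras = N * D          # another 800 numbers

# With M inducing points (a sparse GP), you would instead store the inducing
# inputs plus one set of inducing outputs per output dimension:
M = 10
sparse_extras = M * Q + M * D    # 20 + 80 = 100 numbers

print(latents + exact_gp_extras)  # 1000: worse than storing Y directly
print(latents + sparse_extras)    # 300: smaller than Y's 800
```

So my reading is that an exact GPLVM carries the training targets around and buys no compression on its own, and any saving would have to come from a sparse/inducing-point variant. Is that right?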

Thanks !