Bayesian GPLVM - Latent dimension fixed at 2?

Hi there,

I noticed that the Bayesian GPLVM class doesn't allow the latent variable X to have anything other than 2 dimensions. Why is that?

    def __init__(self, base_model, name="GPLVM"):
        super(GPLVM, self).__init__(name)
        if base_model.X.dim() != 2:
            raise ValueError("GPLVM model only works with 2D latent X, but got "
                             "X.dim() = {}.".format(base_model.X.dim()))
        self.base_model = base_model
        self.y = self.base_model.y

        self.X_loc = Parameter(self.base_model.X)

        C = self.X_loc.shape[1]
        X_scale_tril_shape = self.X_loc.shape + (C,)
        Id = torch.eye(C, out=self.X_loc.new_empty(C, C))
        X_scale_tril = Id.expand(X_scale_tril_shape)
        self.X_scale_tril = Parameter(X_scale_tril)
        self.set_constraint("X_scale_tril", constraints.lower_cholesky)

        self._call_base_model_guide = True

If I just removed that check, would it still work?

Hi @vr308, if you have a 1D input X, then you can simply do X.unsqueeze(1) to make it 2D.
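To make the point concrete, here is a minimal sketch (the tensor names and sizes are just illustrative): the `X.dim() != 2` check in `__init__` tests the tensor's rank (number of axes), not the size of the latent space, so a 2D tensor of shape `(N, Q)` passes for any number of latent dimensions `Q`.

```python
import torch

# A 1D tensor of N points has rank 1 and would be rejected by the check.
X_1d = torch.randn(100)        # shape (100,), X_1d.dim() == 1

# unsqueeze(1) adds a trailing axis, giving shape (100, 1): rank 2, accepted.
X_2d = X_1d.unsqueeze(1)       # shape (100, 1), X_2d.dim() == 2

# A latent space with many more than 2 dimensions also passes, because the
# tensor is still rank 2: N rows of Q-dimensional latent points.
X_q = torch.randn(100, 10)     # Q = 10 latent dimensions, X_q.dim() == 2

print(X_1d.dim(), X_2d.dim(), X_q.dim())
```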

Thanks, but I wanted to set the latent dimension to the same as that of the original data `y`, so much higher than 2.

I see. Currently, batch GP is not supported; I haven't had time to work on that feature yet.