Hi,
I am currently trying out Deep Kernel Learning with Pyro by following the tutorial (Example: Deep Kernel Learning — Pyro Tutorials 1.8.4 documentation). I changed the warping network to match the architecture described in the following paper (http://proceedings.mlr.press/v51/wilson16.pdf), which results in:
```python
import torch
import torch.nn as nn

class WarpCore(nn.Module):
    def __init__(self, dims):
        super(WarpCore, self).__init__()
        self.fc1 = nn.Linear(dims, 1000)
        self.fc2 = nn.Linear(1000, 500)
        self.fc3 = nn.Linear(500, 50)
        self.fc4 = nn.Linear(50, 2)

    def forward(self, x):
        x = torch.relu(self.fc1(x))
        x = torch.relu(self.fc2(x))
        x = torch.relu(self.fc3(x))
        x = self.fc4(x)
        return x
```
This works fine. Now I was thinking of including an nn.Embedding layer in the WarpCore, such as:
```python
import torch
import torch.nn as nn

class WarpCore(nn.Module):
    def __init__(self, dims):
        super(WarpCore, self).__init__()
        self.embs = nn.Embedding(1000, 1000)
        self.fc1 = nn.Linear(1000, 1000)
        self.fc2 = nn.Linear(1000, 500)
        self.fc3 = nn.Linear(500, 50)
        self.fc4 = nn.Linear(50, 2)

    def forward(self, x):
        x = torch.relu(self.fc1(self.embs(x)))
        x = torch.relu(self.fc2(x))
        x = torch.relu(self.fc3(x))
        x = self.fc4(x)
        return x
```
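As a side note (my own minimal check, not from the tutorial): nn.Embedding is a lookup table, so it only accepts integer (torch.long) indices and always returns float vectors:

```python
import torch
import torch.nn as nn

emb = nn.Embedding(1000, 1000)           # 1000 rows, each a 1000-dim float vector
idx = torch.randint(0, 1000, (8,))       # dtype torch.int64 (long)
vecs = emb(idx)
print(vecs.shape)                        # torch.Size([8, 1000])
print(vecs.dtype)                        # torch.float32

# Passing a float tensor instead raises a RuntimeError,
# since embedding indices must be integers:
# emb(torch.randn(8))
```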
This on its own also works fine, until I get to the part where I need to specify the inducing points:
```python
batches = []
for i, (data, _) in enumerate(train_loader):
    batches.append(data)
    if i >= ((args.num_inducing - 1) // args.batch_size):
        break
Xu = torch.cat(batches)[:args.num_inducing].clone()
```
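For concreteness, here is a self-contained version of that selection with made-up sizes (num_inducing=10, batch_size=4, standing in for the args values): it just collects enough leading batches and slices off the first num_inducing rows.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

num_inducing, batch_size = 10, 4          # stand-ins for args.num_inducing / args.batch_size
dataset = TensorDataset(torch.randn(32, 5), torch.zeros(32))
train_loader = DataLoader(dataset, batch_size=batch_size)

batches = []
for i, (data, _) in enumerate(train_loader):
    batches.append(data)
    if i >= ((num_inducing - 1) // batch_size):
        break                             # stop once enough rows are collected
Xu = torch.cat(batches)[:num_inducing].clone()
print(Xu.shape)                           # torch.Size([10, 5])
```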
For me it's not quite clear how to specify the inducing points, as they will be "produced" by the embedding layer. Also, the embedding layer requires me to pass a tensor of type torch.long, whereas Xu needs to be a float, since it is defined as self.Xu = Parameter(Xu) in vsgp.py.
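One direction I considered (just a sketch, and possibly wrong): move the embedding outside the GP entirely, so the warping function and Xu both live in the continuous embedded space, where a float Parameter makes sense:

```python
import torch
import torch.nn as nn

class Warp(nn.Module):
    """MLP part only -- operates on already-embedded float features."""
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(1000, 1000)
        self.fc2 = nn.Linear(1000, 500)
        self.fc3 = nn.Linear(500, 50)
        self.fc4 = nn.Linear(50, 2)

    def forward(self, x):
        x = torch.relu(self.fc1(x))
        x = torch.relu(self.fc2(x))
        x = torch.relu(self.fc3(x))
        return self.fc4(x)

embs = nn.Embedding(1000, 1000)

# Embed the raw long-typed inputs once, up front; the GP then only
# ever sees float tensors.
idx = torch.randint(0, 1000, (32,))
X = embs(idx).detach()                    # float features, shape (32, 1000)

# Inducing points can now be initialized from the embedded data and
# stored as a float Parameter, as vsgp.py expects.
Xu = X[:10].clone()

# The warping function applies cleanly to both X and Xu:
feat = Warp()(X)                          # shape (32, 2)
```

The obvious drawback is that with detach() the embedding weights would not be trained jointly with the GP, which presumably defeats part of the purpose, so I am not sure this is the right way to go.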
I was thinking that maybe someone else has tried something similar and could give me a pointer on where to look or how to approach this?
Thanks …