Thank you.
Then, do you suggest allowing the DataLoader to provide input with shape (batch_size x num_classes, seq_len, h_in)? Or do you suggest instead changing the input tensor (batch_size, seq_len, h_in) that originally comes from the DataLoader?
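Just to make the first alternative concrete, here is how I imagine it could look with a custom collate_fn (names like tiled_collate and the value num_classes = 2 are just my own placeholders, not taken from the discussion above):

import torch
from torch.utils.data import DataLoader, TensorDataset

num_classes = 2  # hypothetical value, matching the factor used below

def tiled_collate(batch):
    # Stack the individual samples into a batch, then tile the batch dimension.
    xs = torch.stack([b[0] for b in batch])   # (batch_size, seq_len, h_in)
    ys = torch.stack([b[1] for b in batch])
    return xs.repeat(num_classes, 1, 1), ys   # (batch_size * num_classes, seq_len, h_in)

dataset = TensorDataset(torch.randn(128, 40, 50), torch.randint(0, 2, (128,)))
loader = DataLoader(dataset, batch_size=32, collate_fn=tiled_collate)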
As for the second alternative, I found a method in PyTorch (sorry, but I’m quite new to PyTorch too, since I was accustomed to Keras up to now) called Tensor.repeat. In your opinion, is it correct to use it to expand the batch_size dimension of the input by a factor of num_classes? Specifically, reusing the code in the example above, this could amount to the following:
input = torch.randn(32, 40, 50)
input = input.repeat(2, 1, 1)  # tile the tensor twice along the batch dimension
input.shape                    # torch.Size([64, 40, 50])
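(One thing I noticed in the PyTorch docs, in case it matters: Tensor.repeat copies the tensor’s data, whereas Tensor.expand only returns a view and does not allocate new memory until something like contiguous() forces a copy.)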
In your opinion, is it “semantically” correct to repeat the content in order to expand the batch dimension by a factor of num_classes?
As an equivalent way to proceed, what about the following, where I first expand the input tensor along a new leftmost dimension of size num_classes and then reshape it to the desired shape (64, 40, 50)?
input = torch.randn(32, 40, 50)
# expand() adds a new leading dimension of size 2 as a view; contiguous() + view() then flatten it into the batch
input = input.expand(2, -1, -1, -1).contiguous().view(-1, 40, 50)  # (64, 40, 50)
I don’t know whether the two solutions above actually amount to the same thing…
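For what it’s worth, here is a small sanity check I put together to compare the two constructions on random data (just my own sketch; torch.equal should print True if they really coincide elementwise):

import torch

torch.manual_seed(0)
x = torch.randn(32, 40, 50)

a = x.repeat(2, 1, 1)                                      # tile along dim 0
b = x.expand(2, -1, -1, -1).contiguous().view(-1, 40, 50)  # new leading dim, then flatten

print(torch.equal(a, b))  # True if both stack the 32 samples twice in the same order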