Compute Matrix of Interaction Terms

Hey there,

given a matrix z of shape N x P (samples x features), I’d like to construct a matrix with all possible interaction terms, i.e., the final result should be a matrix of shape N x (P(P+1)/2). For example, for a z with 3 feature columns: [z1, z2, z3, z1z2, z1z3, z2z3].

I tried the following:

import torch

def apply_along_axis(function, x, axis=1):
    return torch.stack([function(x_i) for x_i in torch.unbind(x, dim=axis)], dim=axis)

def f(x):
    # strict upper triangle of the outer product; index-based selection avoids
    # filtering with y[y != 0], which would silently drop interactions that
    # happen to be exactly zero
    i, j = torch.triu_indices(len(x), len(x), offset=1)
    return torch.cat((x, torch.outer(x, x)[i, j]))

z = torch.rand(10, 3)
zxz = apply_along_axis(f, z, 0)   # shape: torch.Size([10, 6])

This works like a charm. I compute the outer product row-wise and concatenate the upper triangular part of the resulting matrix (without the diagonal) to the input row.

However, when I increase the number of particles in my pyro.infer.Trace_ELBO instance, an extra dimension appears on the left of my matrix z, which breaks the code (apply_along_axis no longer passes the correct tensors into f). Any ideas how I can construct this interaction matrix independently of the number of particles?

Thanks in advance!

PS: I also use vectorize_particles=True which works fine in all other parts of the code (but in those cases, I am usually in a plate statement).
PPS: For interested readers, there is also https://pyro.ai/examples/sparse_regression.html

Focus your attention on creating an appropriate mask and compute something like

(z.unsqueeze(-1) * z.unsqueeze(-2))[mask]

to get the quadratic bit
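A minimal sketch of the mask idea (the shapes here are illustrative, not from the original post): a boolean mask over the strict upper triangle of the P x P outer product, indexed via `...`, works whether or not a leading particle dimension is present.

```python
import torch

# illustrative shapes: with and without a leading particle dimension
z = torch.rand(10, 3)      # (N, P)
zp = torch.rand(4, 10, 3)  # (particles, N, P)

P = z.shape[-1]
# boolean mask selecting the strict upper triangle of the P x P outer product
mask = torch.triu(torch.ones(P, P, dtype=torch.bool), diagonal=1)

quad = (z.unsqueeze(-1) * z.unsqueeze(-2))[..., mask]      # (N, P*(P-1)/2)
quad_p = (zp.unsqueeze(-1) * zp.unsqueeze(-2))[..., mask]  # (particles, N, P*(P-1)/2)
```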


Awesome, that’s a great idea. For anybody else, here is a possible solution:

K = z.shape[-1]
i, j = torch.tril_indices(K, K, -1)
x = z.unsqueeze(-1) * z.unsqueeze(-2)
zxz = torch.cat((z, x[..., i, j]), dim=-1)  # ... and dim=-1 keep it independent of leading dimensions
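Wrapped as a small helper (the name `interactions` is mine, just for illustration), the same shapes come out regardless of how many leading batch/particle dimensions the input carries:

```python
import torch

K = 3
i, j = torch.tril_indices(K, K, -1)

def interactions(z):
    # works for any number of leading (batch/particle) dimensions
    x = z.unsqueeze(-1) * z.unsqueeze(-2)
    return torch.cat((z, x[..., i, j]), dim=-1)

a = interactions(torch.rand(10, K))     # torch.Size([10, 6])
b = interactions(torch.rand(5, 10, K))  # torch.Size([5, 10, 6])
```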