 # Compute Matrix of Interaction Terms

Hey there,

given a matrix `z` of shape N x P (samples x features), I'd like to construct a matrix with all pairwise interaction terms appended, i.e. the final result should be a matrix of size N x (P(P+1)/2). E.g., for a `z` with 3 feature columns: `[z1, z2, z3, z1z2, z1z3, z2z3]`.

I tried the following:

```python
import torch

def f(x):
    # outer product of the row with itself; keep the strictly
    # upper-triangular entries and append them to the row itself
    iu = torch.triu_indices(x.shape[0], x.shape[0], offset=1)
    return torch.cat([x, torch.outer(x, x)[iu[0], iu[1]]])

def apply_along_axis(function, x, axis=0):
    return torch.stack([function(x_i) for x_i in torch.unbind(x, dim=axis)], dim=axis)

z = torch.rand(10, 3)
zxz = apply_along_axis(f, z, 0)   # shape: torch.Size([10, 6])
```

This works like a charm: I compute the outer product row-wise and concatenate the strictly upper-triangular part of the resulting matrix (i.e., without the diagonal) to the input row.

However, when I increase the number of particles in my `pyro.infer.Trace_ELBO` instance, an additional dimension appears on the left of my matrix `z`, which breaks the code (`apply_along_axis` no longer passes the correct tensor into `f`). Any ideas how I can construct this interaction matrix independently of the number of particles?
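For concreteness, here is a minimal sketch of the breakage (`num_particles = 4` is an arbitrary value I picked for illustration): with a leading particle dimension, unbinding along an axis no longer yields single sample rows.

```python
import torch

z = torch.rand(10, 3)         # (N, P): unbind(dim=0) yields rows of shape (3,)
z_vec = torch.rand(4, 10, 3)  # (num_particles, N, P) with vectorized particles

# Unbinding the batched tensor yields (N, P) matrices, not (P,) rows,
# so a function expecting a single row receives the wrong tensor.
pieces = torch.unbind(z_vec, dim=0)
print(pieces[0].shape)        # torch.Size([10, 3])
```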

PS: I also use `vectorize_particles=True`, which works fine in all other parts of the code (but there I am usually inside a plate statement).
PPS: For interested readers, there is also https://pyro.ai/examples/sparse_regression.html

Focus your attention on creating an appropriate boolean mask and compute something like

`(z.unsqueeze(-1) * z.unsqueeze(-2))[..., mask]`

which, thanks to broadcasting and `...`, is independent of any leading particle dimensions:

```python
import torch

z = torch.rand(10, 3)
P = z.shape[-1]
x = z.unsqueeze(-1) * z.unsqueeze(-2)   # (..., P, P) row-wise outer products
mask = torch.triu(torch.ones(P, P, dtype=torch.bool), diagonal=1)
zxz = torch.cat([z, x[..., mask]], dim=-1)   # shape: torch.Size([10, 6])
```
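As a quick sanity check (a sketch; the helper name `interactions` and the particle count 4 are arbitrary choices of mine), wrapping the mask approach in a function shows that the same code handles both the plain and the vectorized-particles case:

```python
import torch

def interactions(z):
    # append all pairwise products of the last dim to z itself;
    # `...` indexing keeps this agnostic to leading particle dims
    P = z.shape[-1]
    mask = torch.triu(torch.ones(P, P, dtype=torch.bool), diagonal=1)
    x = z.unsqueeze(-1) * z.unsqueeze(-2)
    return torch.cat([z, x[..., mask]], dim=-1)

print(interactions(torch.rand(10, 3)).shape)     # torch.Size([10, 6])
print(interactions(torch.rand(4, 10, 3)).shape)  # torch.Size([4, 10, 6])
```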