Is Pyro the right tool for general graphical models?

Hi again,

As I said above, the generative model can be written without trouble. However, I now want to train the parameters of my DBN by declaring them with pyro.param. This works fine for mu and sigma, but the problem is weights: it is a sparse N*N matrix whose structure is dictated by the adjacency matrix of my graph, and I don't want to train N^2 parameters if I can avoid it.
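
To make this concrete, here is a stripped-down sketch of what I mean; N, adj, and the linear-Gaussian transition are just placeholders, not my actual model:

```python
import torch
import pyro
import pyro.distributions as dist
from pyro.distributions import constraints

N = 10                # number of nodes (placeholder)
adj = torch.eye(N)    # adjacency matrix of my graph (placeholder)

def model(data):
    # data: tensor of shape (T, N), one row per time step
    mu = pyro.param("mu", torch.zeros(N))
    sigma = pyro.param("sigma", torch.ones(N), constraint=constraints.positive)
    # this is what bothers me: a dense N x N parameter, even though
    # only the entries where adj == 1 mean anything for my graph
    weights = pyro.param("weights", torch.zeros(N, N))

    x_prev = data[0]
    for t in range(1, len(data)):
        # masking the forward pass is easy, but all N^2 entries
        # are still registered as trainable parameters
        loc = mu + (weights * adj) @ x_prev
        pyro.sample(f"x_{t}", dist.Normal(loc, sigma), obs=data[t])
        x_prev = data[t]
```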

Is there an equivalent of poutine.mask for parameters, so that I can specify which entries should be updated while keeping the code vectorized?
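For sample sites I know how to do this kind of masking (sketch below, reusing N, adj, and dist from the snippet above), which is why I was hoping something similar existed for pyro.param:

```python
from pyro import poutine

# for a sample site, entries where the mask is False are simply
# excluded from the log-density
with poutine.mask(mask=adj.bool()):
    w = pyro.sample("w", dist.Normal(torch.zeros(N, N), torch.ones(N, N)))
```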
Otherwise, I was thinking of declaring weights as a full N*N latent variable and putting a Laplace prior on it to encourage sparsity, but that is much less satisfying…
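
For reference, this is roughly what I mean by the Laplace alternative (again only a sketch, reusing the names from above; the prior scale of 0.1 is arbitrary):

```python
def model(data):
    # treat the full N x N matrix as a latent variable with a
    # sparsity-inducing Laplace prior, instead of a pyro.param
    weights = pyro.sample(
        "weights",
        dist.Laplace(torch.zeros(N, N), 0.1 * torch.ones(N, N)).to_event(2),
    )
    # ... rest of the DBN transition as in the first sketch, using weights
```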

Thanks in advance
Giom