Hello,
I am relatively new to this language and these concepts, so please forgive my naivete.
I am attempting to use the inference framework provided by Pyro, and I am finding PyTorch's parameterized distributions relatively intuitive. The model/guide setup seems to run without issue up to this point, but I believe I have a "kink", so to speak, somewhere in the loss back-propagation, since I cannot get the model to converge to the expected output.
In my model/guide I have a functional mapping a -> a'. For effect, this map can be represented as a square 2D matrix with a in the rows and a' in the columns, where each entry is the probability of a being observed as a'. I am hoping to use this matrix P(a'|a) and its easily computable cousin P(a|a') to infer the input of my model given a certain output measurement further down the line.
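For concreteness, this is roughly how I picture recovering P(a|a') from the forward matrix P(a'|a) with plain PyTorch, assuming a uniform prior over a (both the matrix and the prior here are placeholders, not my real values):

```python
import torch

# Hypothetical forward matrix: row i holds P(a' | a = i); each row sums to 1.
num_states = 4
p_aprime_given_a = torch.softmax(torch.randn(num_states, num_states), dim=-1)

# Placeholder uniform prior over the input a.
prior_a = torch.ones(num_states) / num_states

# Bayes' rule: P(a | a') is proportional to P(a' | a) * P(a),
# renormalized down each column (i.e. for each fixed a').
joint = p_aprime_given_a * prior_a.unsqueeze(-1)           # joint[a, a'] = P(a, a')
p_a_given_aprime = joint / joint.sum(dim=0, keepdim=True)  # columns now sum to 1
```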
In my efforts, I have come across the concept of PyTorch Transformed Distributions, but I am uncertain of their utility in this situation. Additionally, I have looked into Categorical distributions.
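To show where I am trying to fit a Categorical in, here is a stripped-down sketch of the kind of model/guide pair I am attempting; the names, the uniform prior, and the random matrix are placeholders rather than my actual setup:

```python
import torch
import pyro
import pyro.distributions as dist
from pyro.distributions import constraints
from pyro.infer import SVI, Trace_ELBO
from pyro.optim import Adam

pyro.clear_param_store()

# Hypothetical forward matrix: row i holds P(a' | a = i); each row sums to 1.
num_states = 4
transition_matrix = torch.softmax(torch.randn(num_states, num_states), dim=-1)

def model(observed_a_prime):
    # Placeholder uniform prior over the unknown input a.
    a = pyro.sample("latent_a", dist.Categorical(torch.ones(num_states) / num_states))
    # The observed output a' is drawn from the row of the matrix selected by a.
    pyro.sample("obs_a_prime", dist.Categorical(transition_matrix[a]), obs=observed_a_prime)

def guide(observed_a_prime):
    # Learnable posterior over a, constrained to the probability simplex.
    posterior_probs = pyro.param(
        "posterior_probs",
        torch.ones(num_states) / num_states,
        constraint=constraints.simplex,
    )
    pyro.sample("latent_a", dist.Categorical(posterior_probs))

svi = SVI(model, guide, Adam({"lr": 0.05}), loss=Trace_ELBO())
observed = torch.tensor(2)  # a single measured a'
for step in range(1000):
    svi.step(observed)

print(pyro.param("posterior_probs"))  # my estimate of P(a | a' = 2)
```

My real setup has more going on between a and the final measurement, but this captures the shape of the inference I am after.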
Any tips/tricks/help would be very much appreciated!