Hi everyone, I’m a beginner at probabilistic programming.
I’m trying to learn the parameters of a Bayesian network from data.
Then I want to use the trained network to infer one discrete variable while observing all of the others.
Here is what I did:

I created `model(data)`, which represents my Bayesian network, and generated some data from it to check that the priors make sense. The generated data looks OK.
I used `pyro.param()` inside `model()` to represent the conditional probability tables, i.e. the parameters of the network.
I used `AutoGuide()` to create the guide.
I ran SVI with `Trace_ELBO` to train the model; the loss converged and the learned parameters look OK.

Here is where I’m stuck. I now want to condition the model on all of the variables except one, which is modeled as a `dist.Bernoulli(p)` distribution.
My approach was to create a new model, `trained_model()`, this time using the trained conditional probability tables as fixed `torch.tensor()`s rather than `pyro.param()`s. Only the parameter `p` remains a `pyro.param()`. I then run SVI with `Trace_ELBO()` again, and it doesn’t quite work for me: if I run inference multiple times on the same observed data, the inferred values come out different each time.
Is this approach OK? Is there a more elegant way to use trained models? My approach seems logical to me, but not very elegant. Any feedback or pointers to something useful would be appreciated.
I’m sorry the post is quite long for what seems to be a trivial problem.