Bayesian network workflow for dummies

Hi everyone, I’m a beginner at probabilistic programming.

I’m trying to learn the parameters of a Bayesian network from data.
Then I want to use that trained network to infer one discrete variable while observing all of the others.

So here is what I did:

  1. I created a model(data) function that represents my Bayesian network and generated some data from it to check whether the priors make sense. The generated data looks OK.

  2. I used pyro.param() inside model() to represent the conditional probability tables, i.e. the parameters of the network.

  3. I used an AutoGuide to create the guide.

  4. I trained the model with SVI and Trace_ELBO; the loss converged and the learned parameters look OK.

  5. Here is where I’m stuck. I now want to condition the model on all of the variables except one, which is modeled as a dist.Bernoulli(p) distribution.
    My approach was to create a new model, trained_model(), this time using the trained conditional probability tables as plain torch.tensor()s rather than pyro.param()s; only the parameter p remains a pyro.param(). I then run SVI with Trace_ELBO() again, but it doesn’t quite work for me, in the sense that the inferred values differ between runs on the same observed data.

Is this approach OK? Is there a more elegant way to use trained models? My approach seems logical to me, but not very elegant. Any feedback or pointers to something useful would be appreciated.
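To be clear about what I mean by “doesn’t quite work”: with everything except the one Bernoulli node observed, the posterior I want step 5 to recover is a fixed number, so repeated inference runs on the same evidence should all land near it. Here is that target computed by hand on a hypothetical two-node network A → B with made-up CPT values (plain Python, no Pyro):

```python
# Hypothetical "trained" CPTs for a two-node network A -> B (values made up)
p_a = 0.5                  # prior P(A = 1)
p_b_given_a = [1/3, 2/3]   # P(B = 1 | A = a) for a in {0, 1}

def posterior_a(b_obs):
    """Exact P(A = 1 | B = b_obs) by Bayes' rule."""
    lik = [p if b_obs == 1 else 1 - p for p in p_b_given_a]
    joint1 = p_a * lik[1]          # P(A = 1, B = b_obs)
    joint0 = (1 - p_a) * lik[0]    # P(A = 0, B = b_obs)
    return joint1 / (joint1 + joint0)

print(posterior_a(1))  # 2/3: observing B = 1 makes A = 1 more likely
```

My SVI runs on trained_model() give me noticeably different values for p each time, rather than clustering around the analogous exact answer.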

Sorry the post is quite long for what seems to be a trivial problem.