At first glance, it is a bit tricky. But if you take a second to think about it, it makes sense. We've defined a model, and if we sample from that model, then we forward sample in order from the model priors. I.e. we first sample weight, and then we sample measurement. But the obs parameter tells the model that we actually see that measurement is 9.5. So we replace measurement samples with the value we see. And voila: we've sampled from the model prior, but we've also incorporated the explicit knowledge we have from observing a variable in the real world.
This is sampling. Note that sampling is different from inference. There are different ways to perform inference, some of which use sampling-based methods. But forward sampling from a model is not the same as inference. Intuitively, you can think of forward sampling as following the order of the model forward to get data, while inference might be seen as reasoning backward from data.
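To make the distinction concrete, here is a plain-Python sketch of what forward sampling with an observation does. The model and numbers (weight ~ Normal(8.5, 1.0), measurement ~ Normal(weight, 0.75), observed value 9.5) are assumptions for illustration, mimicking how an obs keyword clamps a sample site:

```python
import random
import statistics

# Hypothetical plain-Python sketch of forward sampling from a
# Pyro-style model. Assumed model: weight ~ Normal(guess, 1.0),
# measurement ~ Normal(weight, 0.75). Numbers are illustrative.

def model(guess, obs=None):
    weight = random.gauss(guess, 1.0)           # sample weight from its prior
    if obs is not None:
        measurement = obs                       # obs clamps the sample site
    else:
        measurement = random.gauss(weight, 0.75)
    return weight, measurement

random.seed(0)
# Forward sample many times with measurement observed to be 9.5.
samples = [model(8.5, obs=9.5) for _ in range(10_000)]
weights = [w for w, _ in samples]

# Every measurement is exactly the observed value...
assert all(m == 9.5 for _, m in samples)
# ...but weight is still drawn from its prior Normal(8.5, 1.0):
print(statistics.mean(weights))  # stays near 8.5, not pulled toward 9.5
```

The point: clamping measurement does nothing to the weight samples. Its empirical mean stays near the prior mean, which is exactly why forward sampling with obs is not inference.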
To perform inference in Pyro, you have to explicitly define an inference scheme that tells Pyro how to make guesses about latent variables β in this case, weight.
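In Pyro that means handing the model to an inference algorithm (e.g. pyro.infer.SVI with a guide). For this particular one-Gaussian model the posterior over weight is also available in closed form, so a plain-Python sketch (assumed illustrative numbers: prior Normal(8.5, 1.0), noise sd 0.75, observation 9.5) shows what inference would recover:

```python
# Closed-form Gaussian posterior for weight given measurement = 9.5.
# Assumed (illustrative) model: weight ~ Normal(mu0, sigma0),
# measurement ~ Normal(weight, sigma). Conjugacy gives the posterior
# exactly; Pyro's inference machinery approximates the same thing.

mu0, sigma0 = 8.5, 1.0      # prior over weight (the "guess")
sigma = 0.75                # measurement noise
y = 9.5                     # observed measurement

prior_prec = 1.0 / sigma0**2          # precision of the prior
like_prec = 1.0 / sigma**2            # precision of the likelihood
post_prec = prior_prec + like_prec    # precisions add under conjugacy
post_mean = (prior_prec * mu0 + like_prec * y) / post_prec
post_sd = post_prec ** -0.5

print(post_mean, post_sd)  # mean 9.14, sd 0.6: shifted toward 9.5
```

Notice the contrast with forward sampling: after inference, the weight distribution has moved toward the observation (mean 9.14 instead of the prior's 8.5) and tightened (sd 0.6 instead of 1.0).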
TL;DR: what you're describing is sampling, not inference. If you want weight to adjust for the fact that you observe measurement to be 9.5, then you need to define an inference scheme.
Thank you for the answer! I think I mostly understand now.
Then if we plug in 9.5 for measurement, does something in the "weight" distribution change (or get revealed)?
I want to know what you mean by inferring the weight distribution given measurement. As far as I know, this amounts to inspecting the "weight" distribution. So what do we know about the "weight" distribution after conditioning the model?