# Tutorial example problem

I’m reading the tutorial, and the first example with weather was 100% okay, but this one:

```python
import pyro
import pyro.distributions as dist

def scale(guess):
    weight = pyro.sample("weight", dist.Normal(guess, 1.0))
    return pyro.sample("measurement", dist.Normal(weight, 0.75))
```

It talks about a noisy scale, and models the measurement as a draw from a normal distribution whose mean is itself a sample from a normal centered on our guess. I spent an hour trying to fathom this generative model and it makes no sense to me. Can anyone please explain it, or tell me whether you find it confusing too?

Can you formulate your confusion as a specific question? Intuitively, this means you have a noisy `guess` about what you think the weather will be. Then you take a measurement, which you believe to be normally distributed around `weight`, which is itself conditioned on your initial guess.

Hi, thanks for your answer. I think you confused the weather example with the noisy scale example; the two are different. What I'm saying is that it doesn't make sense to say the scale measurement is drawn from a Gaussian centered at a value that was itself drawn from a Gaussian centered around my guess. It seems far-fetched, if not implausible. The generative models we assume are pretty much arbitrary, aren't they?

Hello there,
From what I understand, the variable `weight` corresponds to the real physical value, which of course is not actually generated as a Gaussian around your guess. The normal distribution in this case represents a prior, i.e. a preliminary idea you have of the true value before measuring. You could assume all values are equally likely, but if you weigh a loaf of bread, you expect something in the range of 100 grams rather than 100 tons. Stating this assumption as a prior often makes inference easier, at least when the prior is reasonable. Do I make sense?
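To make the prior idea concrete, here is a minimal pure-Python sketch (no Pyro; the 100 g loaf guess and 15 g spread are made-up illustrative numbers, not from the tutorial). Sampling from the prior shows that it concentrates belief around a plausible value instead of spreading it over all possible weights:

```python
import random

random.seed(0)

# Hypothetical prior for the weight of a loaf of bread:
# centered on a 100 g guess with a 15 g standard deviation.
guess, spread = 100.0, 15.0
samples = [random.gauss(guess, spread) for _ in range(10_000)]

mean = sum(samples) / len(samples)
# Fraction of prior samples within 50 g of the guess.
near_guess = sum(1 for s in samples if 50.0 <= s <= 150.0) / len(samples)

print(round(mean, 1))   # close to 100 g
print(near_guess)       # nearly all prior mass is near the guess
```

Nothing here says the bread's true weight fluctuates; the samples just express where we would bet the weight lies before looking at the scale.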


@Bayes can you explain what you think a better generative model would be? It's deliberately simplistic, since it's there to introduce the concepts of modeling, but the premise makes sense in my opinion. The goal of a generative model is to bridge your observations to your prior; this is just one way to do it.


Yes, you make perfect sense as far as the prior is concerned. My issue is with the scale output. I understand that it's a noisy scale, so it will produce a sample from a random normal. My problem is that this normal is centered around a random sample from the prior, when it should be centered around something like the real value. What kind of noise is hitting that scale?

I'd say the scale output is a random normal around the real value, or around our guess, but not around a random sample drawn from our prior.

The thing is, the scale output *is* generated following a normal distribution around the real value, which is `weight`. Saying that this real value was drawn from our prior is simply a way to penalize unlikely values, for instance when you write down the log-likelihood function. It doesn't mean that the actual weight of the object fluctuates around our guess.
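One way to see the "penalize unlikely values" point is to write the model's log-joint density by hand. This is a pure-Python sketch (no Pyro; the values `guess = 8.5` and `measurement = 9.5` are made up for illustration): a weight consistent with both the prior and the measurement scores much higher than one the prior considers absurd.

```python
import math

def normal_logpdf(x, mu, sigma):
    """Log density of Normal(mu, sigma) at x."""
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

def log_joint(weight, measurement, guess):
    # Prior term: penalizes weights far from our guess.
    prior = normal_logpdf(weight, guess, 1.0)
    # Likelihood term: penalizes weights far from what the scale read.
    likelihood = normal_logpdf(measurement, weight, 0.75)
    return prior + likelihood

guess, measurement = 8.5, 9.5
# A weight near both the guess and the measurement scores well...
plausible = log_joint(9.0, measurement, guess)
# ...while a weight the prior deems absurd is heavily penalized.
absurd = log_joint(20.0, measurement, guess)
print(plausible > absurd)  # True
```

Inference then amounts to asking which values of `weight` make this log-joint large given the observed `measurement`; the prior shapes the answer without claiming the true weight is random.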