Return value of pyro.sample is different inside the context of a model

@tiger and I are trying to contribute back to the excellent Bayesian Methods for Hackers book with a Pyro implementation, and we're running into some confusing behavior around pyro.sample.

Outside of a model:

    cheating_frequency = pyro.sample('cheating_frequency', dist.Uniform(0, 1))
    print(cheating_frequency) # => tensor(0.3947) This makes sense

But inside our model, the same sample statement gives us a different result:

    import torch
    import pyro
    import pyro.distributions as dist
    from pyro.infer import SVI, Trace_ELBO
    from pyro.optim import Adam

    N = 100
    n_steps = 1

    data = torch.cat((torch.ones(35, 1), torch.zeros(65, 1)))

    def model(data):
      cheating_frequency = pyro.sample(
         'cheating_frequency',
          dist.Uniform(0, 1)
      )
      print(cheating_frequency) #=> tensor([[ 1.3497]]) This does not make sense???
      true_answers = dist.Bernoulli(probs=cheating_frequency).sample((100,))
        
      first_coin_flips = dist.Bernoulli(probs=0.5).sample((100,))

      second_coin_flips = dist.Bernoulli(probs=0.5).sample((100,))

      observed_trues = first_coin_flips * true_answers + (1 - first_coin_flips) * second_coin_flips
      observed_proportion = observed_trues.sum() / N
      
      for i in range(len(data)):
        pyro.sample("obs_{}".format(i), dist.Bernoulli(probs=observed_proportion), obs=data[i])

    def guide(data):
      mean = pyro.param('guide_mean', torch.randn(1, 1))
      stdev = pyro.param('guide_stdev', torch.randn(1, 1))
      cheating_frequency = pyro.sample(
         'cheating_frequency',
          dist.Normal(mean, stdev)
      )
      
    # set up the optimizer
    adam_params = {"lr": 0.0005, "betas": (0.90, 0.999)}
    optimizer = Adam(adam_params)

    # set up the inference algorithm
    svi = SVI(model, guide, optimizer, loss=Trace_ELBO())

    # do gradient steps
    for step in range(n_steps):
      svi.step(data)
      if step % 100 == 0:
        print('.', end='')

The print(cheating_frequency) statement prints tensor([[ 1.3497]]).
There are two things that are confusing here:

  1. The cheating_frequency value is a two-dimensional tensor, tensor([[ 1.3497]]), instead of a scalar.
  2. The value seems to have been sampled from the guide's Normal distribution, not from the Uniform distribution on [0, 1] in the model.

The documentation says the following about pyro.sample:

Pyro’s backend uses these names to uniquely identify sample statements and change their behavior at runtime depending on how the enclosing stochastic function is being used.

Is this an example of Pyro changing the behavior of pyro.sample at runtime given the model context? Is Pyro changing both the return type and the distribution it samples from?


You are using variational inference, so the value returned by pyro.sample in the model comes from the corresponding statement in the guide (matched by name, which is the first argument of pyro.sample()). The priors in the model are only used to compute log_prob, not to draw a random sample.
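
Here is a minimal sketch of what happens under the hood, assuming the model and guide from the question (and a positive guide_stdev so the Normal is valid): the guide is traced first, and the model is then replayed against that trace, so the model reuses the guide's value at the cheating_frequency site and only scores it under its Uniform prior.

    from pyro import poutine

    # Run the guide and record the values it samples at each site.
    guide_trace = poutine.trace(guide).get_trace(data)

    # Replay the model against the guide trace: at every sample site whose
    # name matches one in the guide trace ('cheating_frequency' here), the
    # model reuses the guide's value instead of drawing from its prior.
    model_trace = poutine.trace(
        poutine.replay(model, trace=guide_trace)
    ).get_trace(data)

    # Same value at the shared site in both traces ...
    print(guide_trace.nodes['cheating_frequency']['value'])
    print(model_trace.nodes['cheating_frequency']['value'])
    # ... but it is scored under Normal(mean, stdev) in the guide trace and
    # under Uniform(0, 1) in the model trace.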


Yes, for more background on SVI, see the tutorial. We Monte Carlo sample from the guide and score those samples against the model to compute the log p term in the ELBO.
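
Continuing the sketch above, a single Monte Carlo sample's contribution to the ELBO is just the difference of the two traces' total log probabilities (Pyro's Trace_ELBO does more bookkeeping, such as gradient estimation, but this is the core idea):

    # Single-sample Monte Carlo estimate of the ELBO:
    #   log p(x, z) - log q(z), with z drawn from the guide.
    elbo_estimate = model_trace.log_prob_sum() - guide_trace.log_prob_sum()

    # SVI maximizes the ELBO, so svi.step() minimizes -ELBO as its loss.
    loss = -elbo_estimate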
