Sampling multiple values from pyro.sample


#1

I’m trying to implement the example from *Bayesian Methods for Hackers*, Chapter 2: “an algorithm for human deceit”. The algorithm is as follows:

In the interview process for each student, the student flips a coin, hidden from the interviewer. The student agrees to answer honestly if the coin comes up heads. Otherwise, if the coin comes up tails, the student (secretly) flips the coin again, and answers “Yes, I did cheat” if the coin flip lands heads, and “No, I did not cheat”, if the coin flip lands tails. This way, the interviewer does not know if a “Yes” was the result of a guilty plea, or a Heads on a second coin toss. Thus privacy is preserved and the researchers receive honest answers.
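To see why the observed “Yes” rate still encodes the cheating frequency, here is a quick simulation sketch of the interview procedure above (the value `p_cheat = 0.35` is an assumed true frequency, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100_000        # many interviews, so the empirical rate is close to the expectation
p_cheat = 0.35     # assumed true cheating frequency (hypothetical)

true_answers = rng.binomial(1, p_cheat, N)   # 1 = "I did cheat"
first_flips = rng.binomial(1, 0.5, N)        # heads = answer honestly
second_flips = rng.binomial(1, 0.5, N)       # heads = say "Yes" regardless

# A "Yes" is either an honest confession (first flip heads)
# or a heads on the second flip (first flip tails).
yes = first_flips * true_answers + (1 - first_flips) * second_flips

# Expected yes-rate: 0.5 * p_cheat + 0.5 * 0.5 = 0.425 here
print(yes.mean())
```

So the interviewer only observes the yes-rate, but can invert `rate = 0.5 * p + 0.25` to recover the cheating frequency `p`.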

The PyMC3 implementation is:

import numpy as np
import pymc3 as pm
import theano.tensor as tt

N = 100
with pm.Model() as model:
    p = pm.Uniform("freq_cheating", 0, 1)
    true_answers = pm.Bernoulli("truths", p, shape=N, testval=np.random.binomial(1, 0.5, N))
    first_coin_flips = pm.Bernoulli("first_flips", 0.5, shape=N, testval=np.random.binomial(1, 0.5, N))
    second_coin_flips = pm.Bernoulli("second_flips", 0.5, shape=N, testval=np.random.binomial(1, 0.5, N))
    val = first_coin_flips * true_answers + (1 - first_coin_flips) * second_coin_flips
    observed_proportion = pm.Deterministic("observed_proportion", tt.sum(val) / float(N))

I’m trying to implement this in Pyro. On the one hand I’m not surprised it doesn’t work, because it’s a pretty ugly hack, and there are ways to express the same concept more succinctly. But I’m curious whether I can express it the same way in Pyro as they do in PyMC3. What I ideally want is a pyro.sample that samples more than one value and creates unique names for each one, but it doesn’t look like there’s an API for that. Is there a specific reason for not being able to sample more than one value at a time?

import math

import torch
from torch.distributions import constraints

import pyro
import pyro.distributions as dist
from pyro.infer import SVI, Trace_ELBO
from pyro.optim import Adam

N = 100
n_steps = 1

data = torch.cat((torch.ones(35, 1), torch.zeros(65, 1)))

pyro.clear_param_store()

def model(data):
  cheating_frequency = pyro.sample(
      'cheating_frequency',
      dist.Uniform(0, 1)
  )

  true_answers = torch.tensor([pyro.sample("true_answer_{}".format(i), dist.Bernoulli(probs=cheating_frequency)) for i in range(N)], requires_grad=True)

  first_coin_flips = torch.tensor([pyro.sample("first_coin_flips_{}".format(i), dist.Bernoulli(probs=0.5)) for i in range(N)], requires_grad=True)

  second_coin_flips = torch.tensor([pyro.sample("second_coin_flips_{}".format(i), dist.Bernoulli(probs=0.5)) for i in range(N)], requires_grad=True)

  observed_trues = first_coin_flips * true_answers + (1 - first_coin_flips) * second_coin_flips

  observed_proportion = observed_trues.sum() / N
  pyro.sample("obs", dist.Binomial(probs=observed_proportion), obs=data.sum())

def guide(data):
  alpha = pyro.param('alpha', torch.tensor(10.0), constraint=constraints.positive)
  beta = pyro.param('beta', torch.tensor(10.0), constraint=constraints.positive)
  cheating_frequency = pyro.sample(
      'cheating_frequency',
      dist.Beta(alpha, beta)
  )
  true_answers = torch.tensor([pyro.sample("true_answer_{}".format(i), dist.Bernoulli(probs=cheating_frequency)) for i in range(N)])
  first_coin_flips = torch.tensor([pyro.sample("first_coin_flips_{}".format(i), dist.Bernoulli(probs=0.5)) for i in range(N)])
  second_coin_flips = torch.tensor([pyro.sample("second_coin_flips_{}".format(i), dist.Bernoulli(probs=0.5)) for i in range(N)])
  
# setup the optimizer
adam_params = {"lr": 0.0005, "betas": (0.90, 0.999)}
optimizer = Adam(adam_params)

# setup the inference algorithm
svi = SVI(model, guide, optimizer, loss=Trace_ELBO())

def print_progress(step):
  alpha = pyro.param("alpha").item()
  beta = pyro.param("beta").item()
  inferred_mean = alpha / (alpha + beta)
  factor = beta / (alpha * (1.0 + alpha + beta))
  inferred_std = inferred_mean * math.sqrt(factor)

  print("\n[step %i] based on the data and our prior belief, the probability of a student cheating is %.3f +- %.3f" % (step, inferred_mean, inferred_std))

# do gradient steps
for step in range(n_steps):
  svi.step(data)
  if step % 100 == 0:
    print_progress(step)

This prints out:
“based on the data and our prior belief, the probability of a student cheating is nan +- nan”


#2

Hi @jvans,
I think you are missing the argument total_count=len(data) in your Binomial distribution.