I have a simple linear regression model in Pyro with the following code:
data:

           comments  commits  time
Alice          7500       25   4.5
Bob           10100       32   6.0
Cole          18600       49   7.0
Danielle      25200       66  12.0
Erika         27500       96  18.0
import torch
import pyro
import pyro.distributions as dist
from pyro.infer import MCMC, NUTS, Predictive

N = len(data)  # number of developers

slack_comments = torch.tensor(data.comments.values)
github_commits = torch.tensor(data.commits.values)
time = torch.tensor(data.time.values)

dims = {
    "slack_comments": ["developer"],
    "github_commits": ["developer"],
    "time": ["developer"],
}
data_dict = {
    "developer": N,
    "time_since_joined": time,
}
def model(developer, time_since_joined):
    b_sigma = abs(pyro.sample('b_sigma', dist.Normal(0, 300)))
    c_sigma = abs(pyro.sample('c_sigma', dist.Normal(0, 6)))
    b0 = pyro.sample("b0", dist.Normal(0, 200))
    b1 = pyro.sample("b1", dist.Normal(0, 200))
    c0 = pyro.sample("c0", dist.Normal(0, 10))
    c1 = pyro.sample("c1", dist.Normal(0, 10))
    with pyro.plate('developer', developer):
        slack = pyro.sample("slack_comments",
                            dist.Normal(b0 + b1 * time_since_joined, b_sigma),
                            obs=slack_comments)
        github = pyro.sample("github_commits",
                             dist.Normal(c0 + c1 * time_since_joined, c_sigma),
                             obs=github_commits)
    return slack, github
nuts_kernel = NUTS(model, jit_compile=True, ignore_jit_warnings=True)
mcmc = MCMC(nuts_kernel, num_samples=400, warmup_steps=400,
            num_chains=4, disable_progbar=True)
mcmc.run(**data_dict)
posterior_samples = mcmc.get_samples()
posterior_predictive = Predictive(model, posterior_samples).forward(**data_dict)
prior = Predictive(model, num_samples=150).forward(**data_dict)
I have the following questions:

1. The posterior_predictive samples just seem to be the observed data repeated over and over, not actual predictive samples. Why is this happening?

2. The posterior_predictive samples have the shape of the observed data (5 values were passed), which makes sense. However, when I ask for predictions on new time data with a different shape (2 values passed), I still get the old shape of 5 values.
posterior_predictive
{'slack_comments': tensor([[ 7500, 10100, 18600, 25200, 27500],
[ 7500, 10100, 18600, 25200, 27500],
[ 7500, 10100, 18600, 25200, 27500],
...,
[ 7500, 10100, 18600, 25200, 27500],
[ 7500, 10100, 18600, 25200, 27500],
[ 7500, 10100, 18600, 25200, 27500]]),
'github_commits': tensor([[25, 32, 49, 66, 96],
[25, 32, 49, 66, 96],
[25, 32, 49, 66, 96],
...,
[25, 32, 49, 66, 96],
[25, 32, 49, 66, 96],
[25, 32, 49, 66, 96]])}
predictions_dict = {
    "developer": 2,
    "time_since_joined": candidate_devs_time,
}
predictions = Predictive(model, posterior_samples).forward(**predictions_dict)
predictions
{'slack_comments': tensor([[ 7500, 10100, 18600, 25200, 27500],
[ 7500, 10100, 18600, 25200, 27500],
[ 7500, 10100, 18600, 25200, 27500],
...,
[ 7500, 10100, 18600, 25200, 27500],
[ 7500, 10100, 18600, 25200, 27500],
[ 7500, 10100, 18600, 25200, 27500]]),
'github_commits': tensor([[25, 32, 49, 66, 96],
[25, 32, 49, 66, 96],
[25, 32, 49, 66, 96],
...,
[25, 32, 49, 66, 96],
[25, 32, 49, 66, 96],
[25, 32, 49, 66, 96]])}
Am I doing something wrong? I also tried using a guide, but the results were the same. I want to use the same trace/posterior samples to generate predictions for slack_comments and github_commits by passing in new time data.