# What exactly does TracePredictive do?

Hey guys.

I’m unsure about what exactly `TracePredictive(model, posterior, num_samples)` does.

Does it allow us to sample from the posterior predictive distribution?

Here the posterior predictive distribution is what you obtain by marginalizing the likelihood of the data (given the weights of the model) over the posterior distribution of the weights (obtained by using `TracePosterior`).
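To make that marginalization concrete, here is a toy Monte Carlo sketch in plain Python (not Pyro; the model and numbers are invented for illustration). Sampling from the posterior predictive amounts to: draw a weight from the posterior, then draw an observation from the likelihood given that weight.

```python
import random

random.seed(0)

# Hypothetical toy model: y ~ Normal(w, 1), and we pretend we already have
# samples of w from the posterior p(w | data), centered around 2.0.
posterior_w_samples = [random.gauss(2.0, 0.3) for _ in range(1000)]

# Posterior predictive p(y* | data) = integral of p(y* | w) p(w | data) dw.
# Monte Carlo version: for each posterior draw of w, draw y* ~ p(y* | w).
predictive_samples = [random.gauss(w, 1.0) for w in posterior_w_samples]

# The predictive mean ends up near the posterior mean of w (here ~2.0),
# but the predictive spread also includes the observation noise.
mean_prediction = sum(predictive_samples) / len(predictive_samples)
print(mean_prediction)
```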

Could someone please clarify?

@NedimB, yes, it does that job! First, it will collect traces when you call the method `.run(new_data)`. To get samples, you need to call `.marginal().support`. You can take a look at these tests to see how to use this class for prediction. A more detailed example can be found in the bayesian regression tutorial.

@fehiepsi Ah I see, thanks

Another semi-related question: when obtaining a trace from the posterior of the model coefficients using `MCMC()`, why can we only get the marginal from it? Shouldn’t it contain samples from a joint probability distribution? If so, why can we not access it?

Hi @NedimB, that’s a great question! For the next Pyro release, MCMC will hold samples instead of traces. For the current version, you have to call the `marginal()` method to get samples from traces. Holding samples will save a lot of memory in many situations and can improve the speed of MCMC a bit.

@fehiepsi haha okay I am confused now. What exactly is the difference between samples and traces? I have tried to google it but no success

I’m sorry for the questions

@NedimB You can find the documentation of `Trace` here. Basically, a trace holds many things, including sample values and prior info:

Values of trace.nodes are dictionaries of node metadata:

```python
>>> trace.nodes["z"]  # doctest: +SKIP
{'type': 'sample', 'name': 'z', 'is_observed': False,
 'fn': Normal(), 'value': tensor(0.6480), 'args': (), 'kwargs': {},
 'infer': {}, 'scale': 1.0, 'cond_indep_stack': (),
 'done': True, 'stop': False, 'continuation': None}
```