 # Do tricks with distribution vectors

Is it possible to estimate the values `to_be_need` and `to_remember` from the following model?

```python
import torch
import pyro.distributions as dist

def generator(x):
    # values to estimate
    to_be_need = 3
    to_remember = 4

    preds = torch.zeros(len(x)).long()

    for d in range(len(x)):
        # prior on the number of daily leads
        leads = dist.Uniform(100, 200).sample()

        # likelihood
        needs = dist.Poisson(to_be_need * torch.ones(int(leads))).sample()
        memos = dist.Poisson(to_remember * torch.ones(int(leads))).sample()

        # keep leads whose need count is at least the memo count
        deals = needs[needs >= memos]
        # unique(return_counts=True) returns a (values, counts) pair
        values, counts = deals[deals < len(x) - d].unique(return_counts=True)

        preds[values.long() + d] += counts

    return preds

x = torch.arange(0, 100, 1)
y = generator(x)
```

I tried to build a naive model, but rendering it shows some problems with the dependencies:

```python
import torch
import pyro
import pyro.distributions as dist
from pyro.ops.indexing import Vindex

def model(x, y):
    to_be_need = pyro.sample("to_be_need", dist.Uniform(1, 7))
    to_remember = pyro.sample("to_remember", dist.Uniform(1, 7))

    sigma = pyro.sample("sigma", dist.Uniform(0, 1))

    # float so Normal's loc has the right dtype
    preds = torch.zeros(len(x))

    for d in pyro.plate("days", len(x)):
        # prior on daily leads, as in the generator
        leads = pyro.sample(f"leads_{d}", dist.Uniform(100, 200))

        needs = pyro.sample(f"needs_{d}", dist.Poisson(to_be_need * torch.ones(int(leads))))
        memos = pyro.sample(f"memos_{d}", dist.Poisson(to_remember * torch.ones(int(leads))))

        deals = Vindex(needs)[needs >= memos]
        values, counts = deals[deals < len(y) - d].unique(return_counts=True)
        preds[values.long() + d] += counts.float()  # in-place update

        pyro.sample(f"obs_{d}", dist.Normal(preds[d], sigma), obs=y[d].float())
```
```python
pyro.render_model(model, model_args=(x, y))
```

And inference causes an error:

```python
from pyro.infer import MCMC, NUTS

mcmc = MCMC(NUTS(model), num_samples=1000, warmup_steps=200)
mcmc.run(x, y)
```
```
ValueError: Continuous inference cannot handle discrete sample site 'needs_0'. Consider enumerating that variable as documented in https://pyro.ai/examples/enumeration.html . If you are already enumerating, take care to hide this site when constructing an autoguide, e.g. guide = AutoNormal(poutine.block(model, hide=['needs_0'])).
```

The hint in the error advises enumeration, but I have no idea how to apply it.

Hi @Svetozar, Pyro doesn’t have a great way to infer Poisson latent variables. I’d recommend thinking about a continuous relaxation of your model. Re: `pyro.render_model()`, it looks like our rendering is being thwarted by the in-place modification of `preds`. Feel free to file a bug report with your `model()` code. Again, I’d recommend trying to continuously relax your model and avoiding in-place modifications if possible.

Hi @fritzo, I have a question related to your last comment. Could you please explain why and when in-place modifications should be avoided in Pyro?

1. In-place modifications can break automatic differentiation in some circumstances. This is a general PyTorch issue. This is such a difficult problem that JAX made the design choice to completely disallow in-place modifications anywhere. PyTorch is more permissive, but the rules are complex and you’ll need to see what works. A simple necessary rule is: no element of a tensor can depend on another element of the same tensor, in the sense of AD graph dependency.
2. In-place modifications break Pyro’s provenance tracking that is used in `pyro.render_model()`. This affects rendering, but also the occasional inference algorithm that uses provenance tracking, such as the `AutoGaussian` guide and the 2nd generation of TraceGraph_ELBO.
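A small illustration of point 1 (a hypothetical snippet, not from the thread): when one element of a tensor is written in place from other elements of the same tensor, autograd’s version counter detects that a tensor saved for the backward pass was modified, and `backward()` fails:

```python
import torch

x = torch.ones(3, requires_grad=True)
y = x * 2
# element 1 is computed in place from elements 0 and 2 of the same
# tensor, so y[0] and y[2] (saved for backward) are invalidated when
# the write bumps y's version counter
y[1] = y[0] * y[2]
try:
    y.sum().backward()
    print("backward succeeded")
except RuntimeError as e:
    print("backward failed:", e)
```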

Thank you for the answer. I’ve decided it’s better to change the model and avoid this challenge, but I still believe I can get a feel for how to use Pyro and apply it in practice.