Possible bugs: Predictive vs Subsampling, Enumerate Error

In a simple LDA model, I subsample the corpus in the guide:

for doc in pyro.plate("corpus", num_docs, subsample_size=50):
    theta = pyro.sample("topic_dist_for_doc_{}".format(doc), dist.Dirichlet(alpha))

In the model, the corresponding part is:

for doc in pyro.plate("corpus", num_docs):
    theta = pyro.sample("topic_dist_for_doc_{}".format(doc), dist.Dirichlet(alpha))
    with pyro.plate("words_{}".format(doc), num_words_per_doc[doc]):
        z = pyro.sample("topic_for_word_{}".format(doc), dist.Categorical(theta), infer={"enumerate": "parallel"})
        pyro.sample("word_for_position_{}".format(doc), dist.Categorical(beta[z]), obs=data[doc])

When I use pyro.infer.Predictive to get samples, it gives the error KeyError: 'topic_dist_for_doc_137'.
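Roughly, the call looks like this (a simplified sketch; model_args is just a placeholder for whatever arguments my model and guide take):

predictive = pyro.infer.Predictive(model, guide=guide, num_samples=10)
samples = predictive(*model_args)  # placeholder arguments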
I think there is a communication problem between the guide and the model when using the Predictive function. This is probably the same issue as https://forum.pyro.ai/t/pyro-infer-predictive-with-dynamic-model-structure/1958 .
Am I doing something wrong, or is this a bug in Pyro?

Also, there is another issue apart from Predictive. If I do not use subsampling (it only works when the subsample size is < 63), then, I think because of how Pyro handles tensor shapes when collapsing (enumerating) variables (as explained in the enumeration tutorial), I get an error at this line indicating that PyTorch does not support dim >= 64:

z = pyro.sample("topic_for_word_{}".format(doc), dist.Categorical(theta), infer={"enumerate": "parallel"})

I think this is because, when the model is called in SVI, each step of for doc in pyro.plate("corpus", num_docs): allocates an additional enumeration dimension for the next "topic_for_word_{}" site, so the tensor dimensions keep growing.
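For what it's worth, I think the growing dimensions can be seen by printing the shapes of an enumerated model trace, roughly like this (a sketch; model_args is again a placeholder):

from pyro import poutine

# sketch: print the tensor shapes Pyro allocates when enumerating the model;
# each document's "topic_for_word_{}" site seems to get its own enumeration dim
trace = poutine.trace(poutine.enum(model, first_available_dim=-2)).get_trace(*model_args)
trace.compute_log_prob()
print(trace.format_shapes())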
Here, too, I have the same question: am I doing something wrong, or is this a bug in Pyro?
Thanks in advance.

Hi @cprogrammer,

I'm not sure about your first problem, but I believe you can solve the second one by declaring that there is no dependency between the z variables across plate iterations. I believe that if you do this using pyro.markov(…, window=0), Pyro will be able to recycle tensor dimensions and shouldn't give you the error:

# in the model code
for doc in pyro.markov(pyro.plate("corpus", num_docs, subsample_size=50), window=0):
    ...

(@eb8680_2 I would think that logically pyro.plate() would imply pyro.markov(window=0), but I suspect we did not implement that)

Let us know if that doesn’t work!

Hi @fritzo,

Thank you for your reply.

Do you mean pyro.markov(…, dim=0) instead of window=0? If so, it gives an error indicating that vectorized Markov is not implemented yet.
Also, isn't the aim of pyro.plate to provide conditional independence, so that the z variables should be independent across plates without using pyro.markov? Or did I miss something about pyro.plate?

I do mean window=. I think the “vectorized markov not implemented” error occurs if you use pyro.markov as a context manager rather than an iterator. In particular you have two plates (a sequential outer plate “corpus” and a vectorized inner plate “words_…”), and you’d want to use pyro.markov on the sequential outer plate.

When I checked the stable Pyro documentation for markov, there is no such argument as window=: pyro.poutine.handlers — Pyro documentation .
For the second point, should I then also make the words iterative?

Thank you.

Hi @cprogrammer, so sorry, I meant pyro.markov(history=0, ...). I think I confused history and window with some other library.

For the second point, you shouldn't need to make the words iterative; they will be much faster as a vectorized plate.
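For concreteness, here is a sketch of what I mean, combining your model code from above with history=0 (untested, so treat it as a sketch rather than a drop-in fix):

# in the model: wrap the sequential "corpus" plate in pyro.markov(history=0),
# declaring no dependency between documents so enumeration dims can be recycled
for doc in pyro.markov(pyro.plate("corpus", num_docs), history=0):
    theta = pyro.sample("topic_dist_for_doc_{}".format(doc), dist.Dirichlet(alpha))
    with pyro.plate("words_{}".format(doc), num_words_per_doc[doc]):
        z = pyro.sample("topic_for_word_{}".format(doc), dist.Categorical(theta), infer={"enumerate": "parallel"})
        pyro.sample("word_for_position_{}".format(doc), dist.Categorical(beta[z]), obs=data[doc])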