Any inference suggestions for training an HMM with a very large discrete observation space (e.g. around 13,000 states)?


#1

For example, suppose the observations in the dataset are words, i.e. the data consists of articles and documents. The observation states would then be the words themselves, but the vocabulary is very large.

If I just use Trace_ELBO for training, will the performance be good enough, and will the learning be stable?


#2

If your large categorical variable is always observed, then I believe TraceEnum_ELBO should still work fine. Can you try using Pyro >= 0.3.2 so it includes PR #1831? If you still have trouble, please post some or all of your model and we can take a look at optimizing it.
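To see why an observed large categorical variable is not a problem in itself: when every emission is observed, inference only ever *indexes* into the emission distribution at the observed symbol; it never has to sum or enumerate over the 13,000 possible outcomes (enumeration is only needed for the latent hidden states). A minimal sketch of this, using a plain stdlib forward-algorithm implementation rather than Pyro itself (all names here are illustrative, not from the Pyro API):

```python
import math

def logsumexp(xs):
    """Numerically stable log(sum(exp(x) for x in xs))."""
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def hmm_forward_loglik(obs, init, trans, emit):
    """Log-likelihood of an observed symbol sequence under a discrete HMM,
    computed with the forward algorithm.

    init[s]     = P(state_0 = s)
    trans[r][s] = P(state_t = s | state_{t-1} = r)
    emit[s][o]  = P(obs_t = o | state_t = s)

    Note that the observation alphabet (the inner dimension of `emit`)
    can be huge: each step only looks up emit[s][obs[t]] at the observed
    symbol. The per-step cost is O(K^2) in the number K of *hidden*
    states, independent of the vocabulary size.
    """
    K = len(init)
    # alpha[s] = log P(obs[0..t], state_t = s)
    alpha = [math.log(init[s]) + math.log(emit[s][obs[0]]) for s in range(K)]
    for o in obs[1:]:
        alpha = [
            math.log(emit[s][o])
            + logsumexp([alpha[r] + math.log(trans[r][s]) for r in range(K)])
            for s in range(K)
        ]
    return logsumexp(alpha)

# Tiny example: 2 hidden states, 3-symbol vocabulary.
init = [0.6, 0.4]
trans = [[0.7, 0.3], [0.2, 0.8]]
emit = [[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]]
print(hmm_forward_loglik([0, 2, 1], init, trans, emit))
```

The same structure carries over to a Pyro HMM model: with the emissions observed, TraceEnum_ELBO only enumerates the hidden-state chain, so the vocabulary size mainly affects the size of the emission parameter table, not the enumeration cost.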


#3

Hi @fritzo, thank you very much for your suggestion; I will try TraceEnum_ELBO. Up to this point I have been using Trace_ELBO, which does not seem to work well.