DMM/HMM Example for Continuous Data

Are there any examples around of using a DMM/HMM for continuous data? I can see that it should be a simple modification of the DMM example, but I'm quite unsure where to start.

I'd be particularly interested in seeing the small change to the emitter needed to make the DMM example continuous.

@swwolf assuming you're OK with Gaussian observation noise, you'd do something like this to modify the DMM example.

In the Emitter you need forward() to return both the mean and scale parameters needed for a Normal distribution; something like:

mean = self.lin_hidden_to_mean(h2)
scale = self.lin_hidden_to_log_scale(h2).exp()  # exp keeps the scale positive
return mean, scale
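For context, here is a minimal sketch of what the full Gaussian Emitter could look like, following the structure of the Emitter in the DMM tutorial (the hidden-layer names are carried over from the tutorial; the two output heads are the new part):

import torch.nn as nn

class Emitter(nn.Module):
    def __init__(self, input_dim, z_dim, emission_dim):
        super().__init__()
        self.lin_z_to_hidden = nn.Linear(z_dim, emission_dim)
        self.lin_hidden_to_hidden = nn.Linear(emission_dim, emission_dim)
        # two output heads instead of one: the mean and log-scale of the Normal
        self.lin_hidden_to_mean = nn.Linear(emission_dim, input_dim)
        self.lin_hidden_to_log_scale = nn.Linear(emission_dim, input_dim)
        self.relu = nn.ReLU()

    def forward(self, z_t):
        h1 = self.relu(self.lin_z_to_hidden(z_t))
        h2 = self.relu(self.lin_hidden_to_hidden(h1))
        mean = self.lin_hidden_to_mean(h2)
        scale = self.lin_hidden_to_log_scale(h2).exp()
        return mean, scale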

Then, in the observation sample statement in the model, you'd need to use a Normal distribution instead of a Bernoulli distribution.
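Concretely, the statement inside the model's time loop would change along these lines (the site name obs_x_%d and the mini_batch tensors follow the DMM tutorial; mean_t and scale_t are illustrative names):

mean_t, scale_t = self.emitter(z_t)
pyro.sample("obs_x_%d" % t,
            dist.Normal(mean_t, scale_t)
                .mask(mini_batch_mask[:, t - 1:t])
                .to_event(1),
            obs=mini_batch[:, t - 1, :])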

@martinjankowiak

Thank you! I actually got to this solution after a little bit of stumbling.

On that note, I've looked at the GitHub issues about using Viterbi to find the best path through a dataset post-training. Is this feasible? I've done what I'm looking for directly in hmmlearn/pomegranate but am failing to make progress here.

@swwolf I think you can find some hints on the forum (search for Viterbi etc.); also see e.g. here

Thank you! These are the notes I've been looking at. I think I'm being a bit dense here, but I simply don't see how the DMM example fits in. As in, how do I even feed the model in here?

Well, the DMM example doesn't have discrete latent variables, so none of that is relevant.

For the DMM, the learned guide in the tutorial is a mean-field Gaussian, so the mean of that distribution is the mode and gives you the analog of Viterbi in this setting.
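In other words, since a Gaussian's mode coincides with its mean, the point estimate falls straight out of the guide's parameters. A tiny illustration:

import torch
import pyro.distributions as dist

# the variational posterior at one time step (numbers are made up)
q_z = dist.Normal(torch.tensor([0.3, -1.2]), torch.tensor([0.5, 0.4]))
z_map = q_z.mean  # mean == mode for a Normal, so this is the MAP estimate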

That makes more sense here – thank you!

With that said, I'm still unsure how to extract the guide in a form that lets me get at the distributions and use their modes as the analog of Viterbi.

I'm also a bit confused as to how I would even input new data to apply a trained model to. I'm assuming I can use the same poly.get_mini_batch() used to prepare the training data and then apply it accordingly, but since I'm struggling to extract the more basic info, this is a blocker.

Thank you again for the help here!

Another note: --jit doesn't seem to work with the current DMM example (could be my mistake).

Yes, all data needs to be preprocessed in the same way.
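For instance, preprocessing held-out data might look roughly like this; the get_mini_batch arguments below are assumptions, so check polyphonic_data_loader.py in the DMM example for the real signature:

# prepare held-out sequences exactly as the training data was prepared
# (argument names here are guesses)
test_batch, test_batch_reversed, test_batch_mask, test_seq_lengths = \
    poly.get_mini_batch(batch_indices, test_sequences, test_lengths)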

Note that the guide is just a function; you can call it like dmm.guide(mini_batch, ...)

In particular, you can “trace” the guide and then inspect all the latent variables (“z_1”, …):

from pyro import poutine
guide_trace = poutine.trace(dmm.guide).get_trace(mini_batch, ...)
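Once you have that trace you can walk its sample sites and read off each step's mean; for concreteness (site names “z_1”, “z_2”, … follow the DMM tutorial):

# walk the recorded sample sites and read off each Normal's mean (= mode)
for name, site in guide_trace.nodes.items():
    if site["type"] == "sample" and name.startswith("z_"):
        print(name, site["fn"].mean)  # the Viterbi analog in this setting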

(see codebase/tutorials for patterns of this kind)

Alternatively (this is probably easier), you can modify the guide to directly return the latents with a Python return statement.
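As a minimal, self-contained illustration of that pattern (this toy mean-field guide stands in for the DMM guide; the real one would keep its RNN/combiner machinery and just accumulate and return the z_t's):

import torch
import pyro
import pyro.distributions as dist

# toy guide over T latent steps; site names mirror the DMM's "z_%d"
def guide(mini_batch, T=3, z_dim=2):
    zs = []
    for t in range(1, T + 1):
        z_loc = pyro.param("z_loc_%d" % t, torch.zeros(z_dim))
        z_scale = pyro.param("z_scale_%d" % t, torch.ones(z_dim),
                             constraint=dist.constraints.positive)
        z_t = pyro.sample("z_%d" % t, dist.Normal(z_loc, z_scale).to_event(1))
        zs.append(z_t)
    return zs  # an ordinary Python return; SVI ignores it, but you can use it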

This is perfect! I got the main set of data I'm interested in by following your directions!

By returning a list of the z_dists from the guide, I can use their means as the analog of Viterbi here, just as you said.

In line with those questions, is there an example of extracting the discrete latent variables from HMMs built with Pyro? #1802 seems to request it, but the infer_discrete example is a bit too abstract when just getting adjusted to Pyro.

Inference with Discrete Latent Variables — Pyro Tutorials 1.8.4 documentation seems like the best off-the-shelf example that I can find.
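For anyone landing here later, the core pattern from that tutorial condenses to something like the sketch below; the two-state HMM is a toy stand-in, and temperature=0 is what makes infer_discrete return MAP assignments, i.e. the Viterbi path:

import torch
import pyro
import pyro.distributions as dist
from pyro.infer import config_enumerate, infer_discrete

@config_enumerate
def hmm_model(data):
    # toy 2-state HMM with fixed parameters, purely for illustration
    trans = torch.tensor([[0.9, 0.1], [0.1, 0.9]])
    locs = torch.tensor([-1.0, 1.0])
    z = 0
    for t, x in enumerate(data):
        z = pyro.sample("z_%d" % t, dist.Categorical(trans[z]))
        pyro.sample("x_%d" % t, dist.Normal(locs[z], 1.0), obs=x)

# temperature=0 -> most-likely (Viterbi-style) discrete assignments
map_model = infer_discrete(hmm_model, first_available_dim=-1, temperature=0)
trace = pyro.poutine.trace(map_model).get_trace(torch.tensor([-1.0, -0.8, 1.2]))
z_path = [trace.nodes["z_%d" % t]["value"].item() for t in range(3)]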