Regime-switching state-space model

Hey guys,

The model I’m trying to create in Pyro is based on this paper - Variational Learning for Switching State-Space Models.

Can Pyro use exact inference in any way on the discrete latent variables of a regime-switching state-space model, or do I have to use a guide for the latent state variables (with the obvious drawback of high variance from the non-reparameterizable discrete latent variables)?

Secondly (a more basic question): for a linear state-space model, is there a way to add a constant in addition to the transition matrix (as in the Ornstein-Uhlenbeck process), so that the transition of the continuous latent state is x_t+1 = a + b * x_t + w, as opposed to just x_t+1 = b * x_t + w?

Mike

Hi @mike_schoehals, looking at the model in Figure 3 of the paper


I believe you’d need to variationally approximate either the continuous states X or the discrete states S, and then Pyro could marginalize out the other (with GaussianHMM for X or with DiscreteHMM for S).

Re: the linear state space question: yes, you can pass GaussianHMM a transition_dist with nonzero mean, e.g. Normal(a, w_scale), so that the constant a is added at each transition.

:thinking: I think the best approach would be to:

  1. Variationally model the X’s with AutoNormal or AutoLowRankMultivariateNormal.
  2. Use a DiscreteHMM to exactly marginalize out the discrete latent states. The way you’d do this is to manually construct observation_logits = Normal(predictions, scale).log_prob(obs) (or some other likelihood), where predictions are the predicted outputs under each state and obs is the Y, so that log P(Y|X) = Normal(f(X), scale).log_prob(Y).
  3. Optionally use a HaarReparam or DiscreteCosineReparam to get nicely time-correlated posteriors.

Thanks a ton! Let me look into this.

If anyone gets a chance, I just wanted to confirm that this is the way to properly reparameterize an AutoGuide using DiscreteCosineReparam:

guide = autoguide.AutoLowRankMultivariateNormal(RegimeSwitching.Attempt2.model, rank=10)
reparam_guide = poutine.reparam(
    guide, {"_AutoLowRankMultivariateNormal_latent": DiscreteCosineReparam()}
)

The reason I have doubts is that the ELBO losses look exactly the same with and without the reparameterization (I called pyro.set_rng_seed(0) before running the SVI steps). I determined the name of the site I wanted to reparameterize with this code:

func = lambda: guide(data)
trace = poutine.trace(func).get_trace()

for name, site in trace.nodes.items():
    print(site)

One area that might be beneficial for future use (you may already be aware of this) is to generalize the GaussianHMM model, as has been done in TensorFlow Probability with tfp.distributions.LinearGaussianStateSpaceModel, which allows time-specific observation/transition parameters. This would be useful where regime switching is needed (which is the case I’m looking at).

Mike

Hi @mike_schoehals, Pyro’s GaussianHMM does indeed support heterogeneous observation/transition matrices: you simply need to add an extra time dimension. See the docs.

Gotcha, thanks!