I am currently using a `viterbi_decoder` function to infer the discrete states of my HMM. My code is similar to the example in the `infer_discrete` documentation.

```python
import torch

import pyro
import pyro.distributions as dist
from pyro.infer import config_enumerate, infer_discrete


@infer_discrete(first_available_dim=-1, temperature=0)
@config_enumerate
def viterbi_decoder(data, hidden_dim=10):
    transition = 0.3 / hidden_dim + 0.7 * torch.eye(hidden_dim)
    means = torch.arange(float(hidden_dim))
    states = [0]
    for t in pyro.markov(range(len(data))):
        states.append(pyro.sample("states_{}".format(t),
                                  dist.Categorical(transition[states[-1]])))
        pyro.sample("obs_{}".format(t),
                    dist.Normal(means[states[-1]], 1.),
                    obs=data[t])
    return states  # returns maximum likelihood states
```

Is there any way to modify `viterbi_decoder()` so that I can extract the posterior marginal probabilities for each `t` in the data? My current ideas are to use the `Marginals` class or to edit the `_sample_posterior()` function so that it returns `log_probs` after the forward-backward (or Viterbi-like MAP) pass; however, writing the forward-backward algorithm from scratch myself seems more straightforward than either of these hacks. Any advice for keeping all of this in Pyro?
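For concreteness, this is roughly what the from-scratch fallback I'm considering would look like: a minimal forward-backward sketch in plain torch, reusing the transition and emission parameters from `viterbi_decoder` above and fixing the initial state to 0 as in the model. (The function name and structure here are just my sketch, not anything from Pyro.)

```python
import torch


def forward_backward(data, hidden_dim=10):
    # Same model parameters as viterbi_decoder above.
    transition = 0.3 / hidden_dim + 0.7 * torch.eye(hidden_dim)
    means = torch.arange(float(hidden_dim))
    # Emission log-likelihoods, shape (T, hidden_dim).
    emit = torch.distributions.Normal(means, 1.).log_prob(data.unsqueeze(-1))
    log_trans = transition.log()
    T = len(data)

    # Forward pass in log space. The chain starts in state 0, so the
    # first step transitions out of row 0 of the transition matrix.
    log_alpha = torch.empty(T, hidden_dim)
    log_alpha[0] = log_trans[0] + emit[0]
    for t in range(1, T):
        log_alpha[t] = emit[t] + torch.logsumexp(
            log_alpha[t - 1].unsqueeze(-1) + log_trans, dim=0)

    # Backward pass in log space.
    log_beta = torch.zeros(T, hidden_dim)
    for t in range(T - 2, -1, -1):
        log_beta[t] = torch.logsumexp(
            log_trans + emit[t + 1] + log_beta[t + 1], dim=-1)

    # Posterior marginals p(state_t | data), normalized per time step.
    log_post = log_alpha + log_beta
    return (log_post - torch.logsumexp(log_post, dim=-1, keepdim=True)).exp()
```

I'd just rather not maintain this outside of Pyro's machinery if there's a supported way.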

Best,

Adam