Modelling of Remaining Time

Hello,
I’m trying to predict the remaining duration until some event occurs. The model I have in mind is very similar to the ideas presented in this paper for predicting remaining useful life (RUL).

In short, the duration of the process I’m interested in can be modelled as follows:
T ~ LogNormal(mu, sigma)

Given an observation at time t, while the process is still running, the remaining duration R(t) can be modelled using the truncated random variable T | T > t, so:
R(t) ~ ( T | T > t ) - t

And we can make a ‘predictor’ out of this random variable using its expectation and variance, that is:

y_t = E [ R(t) ] + epsilon_t 

where epsilon_t is zero-mean noise with variance Var [ R(t) ].
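
For concreteness, both moments have closed forms for a lower-truncated log-normal; the expectation, for instance, is the standard result

$$
\mathbb{E}[R(t)] = \mathbb{E}[T \mid T > t] - t = \frac{e^{\mu + \sigma^2/2}\,\Phi\!\left(\frac{\mu + \sigma^2 - \ln t}{\sigma}\right)}{\Phi\!\left(\frac{\mu - \ln t}{\sigma}\right)} - t,
$$

where Phi is the standard normal CDF, and Var[ R(t) ] follows the same pattern via the second truncated moment E[ T^2 | T > t ].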

Now I want to use some additional data X_it, which is specific to subject i at time t, to improve the model’s noise term epsilon_t. That is, I wish to use X_it to incorporate random effects into the model.

X_it is a time series of d-dimensional measurements from the last 10 time units, i.e. (in MATLAB notation):

X_it = [x_i,t; x_i,t-1;...;x_i,t-10]

for x_i,j \in R^d.
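
In PyTorch I stack these windows into a batch-first tensor so they can feed the RNNs below; a minimal sketch (the function name and the oldest-to-newest ordering are my own choices):

import torch

def window(series, t, w=11):
    # series: (T_total, d) measurements for one subject
    x_it = series[t - w + 1 : t + 1]   # steps t-10 .. t, oldest to newest
    return x_it.unsqueeze(0)           # (1, 11, d), batch-first for nn.RNN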

To approach this problem gradually, I first wrote a “vanilla” neural network without any stochasticity:

import torch
import torch.nn as nn

class Approx(nn.Module):
    def __init__(self, d, d1, d2):
        super().__init__()
        self.rnn1 = nn.RNN(d, d1, batch_first=True)
        self.rnn2 = nn.RNN(d1, d2, batch_first=True)
        self.head = nn.Linear(d2, 1)

    def forward(self, x, t):
        h, _ = self.rnn1(x)            # x: (batch, 11, d)
        h, _ = self.rnn2(h)
        random = self.head(h[:, -1])   # last hidden state -> scalar correction
        baseline = tau(t)              # E[T | T > t], see below
        return torch.relu(baseline + random - t)

Here, tau is the expectation of the truncated log-normal, E[ T | T > t ].
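
Concretely, a minimal sketch of tau using the closed form above, with mu and sigma treated as already fitted (the signature is mine; in Approx above I’d close over fixed mu and sigma):

import torch
from torch.distributions import Normal

def tau(t, mu, sigma):
    # E[T | T > t] for T ~ LogNormal(mu, sigma), lower-truncated at t
    std_normal = Normal(0.0, 1.0)
    log_t = torch.log(t)
    numer = std_normal.cdf((mu + sigma ** 2 - log_t) / sigma)
    denom = std_normal.cdf((mu - log_t) / sigma)
    return torch.exp(mu + sigma ** 2 / 2) * numer / denom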

And I get the expected results, which are not that bad. But this is merely a pointwise estimate and has no support for uncertainty.

Then I tried to follow the GitHub page in order to figure out how to use Pyro to get what I want.
The problem is that this code was probably written for a previous Pyro version, so it is really hard to understand what they did: it is incompatible with the current documentation and tutorials. For example, the guide function there returns a tuple of “lifted modules” (lifted_module = pyro.random_module("some_neural_net", priors_dict)).

To my understanding, the model component in my case should be something like:

import pyro
import pyro.distributions as dist

def model(x, t, y):
    pyro.sample("obs", dist.LogNormal(mu, sigma), obs=y)

and the guide function is where the learning happens:

class ApproxVar(nn.Module):
    def __init__(self, d, d1, d2):
        super().__init__()
        self.rnn1 = nn.RNN(d, d1, batch_first=True)
        self.rnn2 = nn.RNN(d1, d2, batch_first=True)
        self.head = nn.Linear(d2, 1)

    def forward(self, x, t):
        h, _ = self.rnn1(x)
        h, _ = self.rnn2(h)
        return self.head(h[:, -1])

av = ApproxVar(d, d1, d2)

def guide(x, t, y):
    # set the priors for all the parameters ...
    pyro.module("random", av)
    re = av(x, t)
    structured_var = pyro.sample("obs", dist.LogNormal(mu, sigma))
    return structured_var + re - t

But I very much doubt that this is the way to go, as it does not look anything like the publication’s code or any of the tutorials’ contents.
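
For what it’s worth, the closest I’ve gotten to something that actually runs under current Pyro keeps the network deterministic and only learns a global observation noise. This is just a sketch under my own assumptions (not the publication’s approach), and it still doesn’t use X_it to structure the variance:

import torch
import pyro
import pyro.distributions as dist
from pyro.distributions import constraints
from pyro.infer import SVI, Trace_ELBO
from pyro.optim import Adam

net = Approx(d, d1, d2)              # the deterministic net from above

def model(x, t, y=None):
    pyro.module("net", net)          # register the net's parameters with Pyro
    mean = net(x, t).squeeze(-1)     # pointwise remaining-time prediction
    sigma = pyro.param("sigma", torch.tensor(1.0),
                       constraint=constraints.positive)
    with pyro.plate("data", x.shape[0]):
        return pyro.sample("obs",
                           dist.LogNormal(torch.log(mean + 1e-6), sigma),
                           obs=y)

def guide(x, t, y=None):
    pass                             # no latent sample sites yet

svi = SVI(model, guide, Adam({"lr": 1e-3}), loss=Trace_ELBO())

If that is even the right modern pattern (pyro.module in the model instead of lifted modules in the guide), I’d still need pointers on how to put the random effects from X_it on the noise itself.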

So I came to the conclusion that it is better to ask a silly question than to publish silly results. I’m looking for a reference to some better reading material (preferably accompanied by practical examples), since what I have is no longer relevant to the updated Pyro documentation and I simply don’t get what they did. Even better if someone could spare the time to explain it.

It goes without saying that if I got something wrong in my idea, please let me know.

Thanks