Once again, I’m working on the model described in “AutoDiagonalNormal found no latent variables; Use an empty guide instead” on the Pyro Discussion Forum.
I’m looking for the fastest way to compute point-estimate metrics like MAE and MSE on the validation set during training. My current evaluation function is shown below: I use Predictive to draw samples from the posterior and take their mean to compute the MAE and MSE. I know Predictive can be parallelized, but I’m looking for a faster alternative (I still need to learn how to update my code to run Predictive in parallel properly). My thinking is that I could take the MAP estimate of all my parameters and use it to get a point prediction for each observation. But how exactly can I do that? Do I need to declare a separate AutoDelta guide and use that? A rough sketch of what I have in mind appears after the evaluate function below.
import torch
from pyro.infer import Predictive


def evaluate(guide, model, criterion, val_dataloader_iter, validation_steps, device, metric_agg_fn=None):
    model.eval()  # Set model to evaluate mode.
    predictive_obs = Predictive(model, guide=guide, num_samples=200, return_sites=['obs'])

    # Running statistics.
    running_mae_loss = 0.0
    running_mse_loss = 0.0
    running_elbo = 0.0

    # Iterate over all the validation data.
    for step in range(validation_steps):
        pd_batch = next(val_dataloader_iter)
        # x_feat and y_name are defined globally elsewhere in my script.
        pd_batch['features'] = torch.transpose(torch.stack([pd_batch[x] for x in x_feat]), 0, 1)
        inputs = pd_batch['features'].to(device)
        labels = pd_batch[y_name].to(device)

        # Draw 200 posterior predictive samples and use their mean as the point prediction.
        samples_obs = predictive_obs(inputs)
        point_pred = torch.mean(samples_obs['obs'], dim=0)
        mae_loss = torch.abs(point_pred - labels).mean()
        mse_loss = torch.pow(point_pred - labels, 2).mean()

        running_mae_loss += mae_loss
        running_mse_loss += mse_loss
        running_elbo += svi.evaluate_loss(inputs, labels)  # svi is the global SVI object.

    # The losses are averaged across observations for each minibatch.
    epoch_mae_loss = running_mae_loss / validation_steps
    epoch_mse_loss = running_mse_loss / validation_steps
    elbo_per_life = running_elbo / df_val.count()
    return epoch_mae_loss, epoch_mse_loss, elbo_per_life
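
For concreteness, here is a rough sketch of the point-estimate evaluation I have in mind (just an illustration of the idea, not code I have verified). It assumes the trained AutoDiagonalNormal guide can give point estimates of the latents via median(), and that the model has an 'obs' site whose likelihood distribution exposes a .mean; the helper names point_predict and evaluate_point are just placeholders I made up.

import torch
from pyro import poutine


def point_predict(model, guide, inputs):
    # Point estimates (posterior medians) of all latent sites from the trained guide.
    point_estimates = guide.median()
    # Run the model once with every latent sample site fixed at its point estimate.
    conditioned_model = poutine.condition(model, data=point_estimates)
    trace = poutine.trace(conditioned_model).get_trace(inputs)
    # Use the mean of the likelihood at the 'obs' site as the point prediction,
    # instead of averaging Monte Carlo samples drawn by Predictive.
    return trace.nodes['obs']['fn'].mean


def evaluate_point(model, guide, inputs, labels):
    # Same MAE/MSE as in evaluate(), but from a single deterministic forward pass.
    with torch.no_grad():
        preds = point_predict(model, guide, inputs)
        mae = torch.abs(preds - labels).mean()
        mse = torch.pow(preds - labels, 2).mean()
    return mae, mse

If something like this is valid, I would replace the Predictive call in evaluate() with point_predict() so I don't have to draw 200 samples per batch. Or is a separate AutoDelta guide the intended way to get MAP estimates?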