Training and Evaluation performance issues

I’m noticing some performance issues when alternating between training and evaluation. Here is what I’m doing (for a VAE-like model):

  • Train model for a few epochs
  • Test the model using .eval() under with torch.no_grad()
  • Train the model some more (switching back with .train())
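
To be concrete, the loop above looks roughly like this. The model itself is just a minimal stand-in (the Linear/BatchNorm/Dropout stack and the MSE reconstruction loss are placeholders, not my actual VAE):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the VAE-like model; includes BatchNorm and
# Dropout, whose behavior differs between .train() and .eval() modes.
model = nn.Sequential(
    nn.Linear(8, 8),
    nn.BatchNorm1d(8),
    nn.Dropout(0.2),
    nn.Linear(8, 8),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
data = torch.randn(16, 8)  # dummy batch

def train_epochs(n):
    model.train()  # switch back to training mode
    for _ in range(n):
        opt.zero_grad()
        loss = ((model(data) - data) ** 2).mean()  # placeholder recon loss
        loss.backward()
        opt.step()
    return loss.item()

def evaluate():
    model.eval()  # dropout off, BatchNorm uses running stats
    with torch.no_grad():  # no gradient tracking during testing
        return ((model(data) - data) ** 2).mean().item()

train_epochs(3)  # train for a few epochs
evaluate()       # test under eval mode
train_epochs(3)  # train some more
```

The degradation I see happens right after the evaluate step, once training resumes.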

Two observations:

  1. The reconstructions can degrade
  2. The training loss initially increases (gets worse), then recovers after a few iterations

I’m using the latest stable versions of PyTorch and Pyro.

Are there any special commands needed to implement this correctly? Could it be an issue that I’m executing this in a Jupyter notebook?

Thanks for any help.