I am hitting the memory limit of my 24 GB GPU when running SVI on high-dimensional input data.
The model contains a non-linear decoder and a linear encoder, and involves a lot of broadcasting, external input data, etc.
I am therefore wondering whether it makes sense to switch to MCMC using NUTS. Does anyone here have experience or insight into whether this would reduce GPU memory consumption?
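For concreteness, here is a rough sketch of the model's shape in plain NumPy (sizes and layer widths are hypothetical stand-ins, not my actual dimensions, and the real model is probabilistic rather than this deterministic forward pass):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: samples, input dimension, latent dimension.
N, D, K = 1000, 5000, 32
X = rng.standard_normal((N, D)).astype(np.float32)

# Linear encoder: amortized map from data to latent representation.
W_enc = rng.standard_normal((D, K)).astype(np.float32) * 0.01
Z = X @ W_enc  # shape (N, K)

# Non-linear decoder: small MLP mapping latents back to data space.
W_dec1 = rng.standard_normal((K, 256)).astype(np.float32) * 0.01
W_dec2 = rng.standard_normal((256, D)).astype(np.float32) * 0.01
H = np.tanh(Z @ W_dec1)   # shape (N, 256)
X_recon = H @ W_dec2      # shape (N, D): the full-size reconstruction dominates memory

print(X_recon.shape)
```

The large (N, D) intermediate tensors produced by the decoder are where most of the memory goes in my case.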
Thanks in advance!