Online Learning and Input Retraction

Hi,
My interest is in online learning (using Bayesian updating, where the posterior is used as the next prior) and variable retraction (making each input missing, one at a time, and using the model to infer it).
These functionalities are fairly straightforward for Bayesian networks and similar graphical techniques.

But for probabilistic programming languages (PPLs) such as Pyro, I have some questions.
I understand that with MCMC we are sampling from the posterior. Afterwards, these samples would have to be fitted back to the same distribution family as the prior so they can be reused, for example by moment matching, as in the sketch below.
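To make that concrete, here is a toy sketch of the kind of moment matching I mean (the model and the `mu` site are just an invented example):

```python
import torch
import pyro
import pyro.distributions as dist
from pyro.infer import MCMC, NUTS

def model(data):
    # toy model: unknown mean with a Normal prior
    mu = pyro.sample("mu", dist.Normal(0.0, 10.0))
    with pyro.plate("data", len(data)):
        pyro.sample("obs", dist.Normal(mu, 1.0), obs=data)

data = torch.randn(50) + 3.0
mcmc = MCMC(NUTS(model), num_samples=500, warmup_steps=200)
mcmc.run(data)

# moment-match the posterior draws back into the prior's family (Normal),
# so the result can be plugged in as the prior for the next batch
samples = mcmc.get_samples()["mu"]
next_prior = dist.Normal(samples.mean(), samples.std())
```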
But with Pyro's ADVI approximations (e.g. an AutoNormal autoguide), is the fitted posterior known in closed form and directly reusable as the next prior?
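Here is a sketch of the update loop I have in mind for the ADVI case, on the same toy model; I am assuming `guide.locs.mu` / `guide.scales.mu` is a valid way to read out the fitted variational parameters, but please correct me if that is wrong:

```python
import torch
import pyro
import pyro.distributions as dist
from pyro.infer import SVI, Trace_ELBO
from pyro.infer.autoguide import AutoNormal
from pyro.optim import Adam

def make_model(prior_loc, prior_scale):
    def model(data):
        mu = pyro.sample("mu", dist.Normal(prior_loc, prior_scale))
        with pyro.plate("data", len(data)):
            pyro.sample("obs", dist.Normal(mu, 1.0), obs=data)
    return model

prior_loc, prior_scale = torch.tensor(0.0), torch.tensor(10.0)
for batch in [torch.randn(50) + 3.0, torch.randn(50) + 2.5]:
    pyro.clear_param_store()
    model = make_model(prior_loc, prior_scale)
    guide = AutoNormal(model)
    svi = SVI(model, guide, Adam({"lr": 0.05}), Trace_ELBO())
    for _ in range(1000):
        svi.step(batch)
    # `mu` is unconstrained, so the guide's loc/scale for it are directly
    # the parameters of a Normal posterior; reuse them as the next prior
    prior_loc = guide.locs.mu.detach().clone()
    prior_scale = guide.scales.mu.detach().clone()
```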

Regarding the retraction of each input, is there a way to do it without rewriting/rearranging the inference formulation/code each time? The sketch below shows the kind of thing I am after.
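For illustration, here is what I would hope works: write the generative model once with every variable as a latent sample site, then use `pyro.condition` to clamp all inputs except the one being retracted (the model and variable names are made up):

```python
import torch
import pyro
import pyro.distributions as dist
from pyro.infer import MCMC, NUTS

def model():
    # toy model written once, with every variable as a latent sample site
    x1 = pyro.sample("x1", dist.Normal(0.0, 1.0))
    x2 = pyro.sample("x2", dist.Normal(x1, 1.0))
    pyro.sample("y", dist.Normal(x1 + x2, 0.5))

observed = {"x1": torch.tensor(1.2), "x2": torch.tensor(0.7),
            "y": torch.tensor(2.0)}

for retracted in observed:
    # clamp every input except the retracted one, then infer it
    data = {k: v for k, v in observed.items() if k != retracted}
    conditioned = pyro.condition(model, data=data)
    mcmc = MCMC(NUTS(conditioned), num_samples=500, warmup_steps=200)
    mcmc.run()
    print(retracted, mcmc.get_samples()[retracted].mean())
```

Is something like this the idiomatic approach, or is there a better mechanism for it?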
Thanks.