How to use the pyro.module function

def model(self, xs, ys=None):
    # register this pytorch module and all of its sub-modules with pyro
    pyro.module("ss_vae", self)  # ME: replace with pyro.module("ss_vae", self.decoder)?
    ...

def guide(self, xs, ys=None):
    # inform Pyro that the variables in the batch of xs, ys are conditionally independent
    with pyro.plate("data"):  # ME: insert pyro.module("ss_vae", self.encoder_z) here?
        ...

def model_classify(self, xs, ys=None):
    # register all pytorch (sub)modules with pyro
    pyro.module("ss_vae", self)  # ME: replace with pyro.module("ss_vae", self.encoder_y)?
    with pyro.plate("data"):
        ...

def guide_classify(self, xs, ys=None):
    """
    dummy guide function to accompany model_classify in inference
    """
    pass

Can the above code for registering modules in SSVAE be changed to the code in my comments (marked with 'ME')?

In fact, I don't quite understand how exactly pyro.module registers a neural network with Pyro's parameter store (i.e., how to use the pyro.module function). I thought that, whether in a model or a guide, if it uses an nn.Module 'A', it should register that nn.Module 'A'. But after I saw the SSVAE code, I was completely confused. The model function uses the decoder, but registers self. Does self contain all the nn.Modules of the SSVAE class? And if so, why does model_classify have to register self one more time?

pyro.module simply registers the parameters (i.e. weights and biases) of an nn.Module with Pyro's param store for optimization. I would suggest looking at the simpler vae.py example first, which explicitly registers the encoder and decoder networks as in your proposal. Now, the reason this works with pyro.module("ss_vae", self) is that self is itself an instance of nn.Module: notice that the SSVAE class inherits from nn.Module. Because of this, any submodules that are set as attributes of the class are automatically registered in _modules (via some __setattr__ magic inside torch.nn.Module), and all of their parameters are collected by pyro.module via nn.Module.named_parameters() when it registers them for optimization. This makes it convenient to group multiple nn.Modules together without having to register each of them separately with the param store.
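For concreteness, here is a minimal sketch (the class skeleton and layer shapes are made up, not the tutorial's actual architecture) of why registering self picks up all of the submodules' parameters:

import torch.nn as nn

class SSVAE(nn.Module):
    def __init__(self):
        super().__init__()
        # assigning nn.Modules as attributes auto-registers them in self._modules
        self.encoder_y = nn.Linear(784, 10)
        self.encoder_z = nn.Linear(784, 50)
        self.decoder = nn.Linear(50, 784)

ss_vae = SSVAE()
# named_parameters() walks all submodules, so a single
# pyro.module("ss_vae", ss_vae) call registers every weight and bias below
for name, _ in ss_vae.named_parameters():
    print(name)  # encoder_y.weight, encoder_y.bias, encoder_z.weight, ...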

Why does the pyro.module("ss_vae", self) call appear in both the model function and the model_classify function? Because, as you said, the registration only needs to happen once, not twice.

If aux-loss is set to True, the code inside model isn't executed on that step; instead, the code inside model_classify is executed. That's the reason the call needs to be there in both models.
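Roughly, the wiring looks like this (a sketch, not the exact tutorial code; ss_vae is the model object and args comes from argparse):

from pyro.infer import SVI, Trace_ELBO
from pyro.optim import Adam

optimizer = Adam({"lr": 1e-3})
losses = [SVI(ss_vae.model, ss_vae.guide, optimizer, loss=Trace_ELBO())]
if args.aux_loss:  # set by the --aux-loss command line flag
    losses.append(SVI(ss_vae.model_classify, ss_vae.guide_classify,
                      optimizer, loss=Trace_ELBO()))
# each SVI object only ever traces its own model, which is why the
# pyro.module call has to appear in both model and model_classify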

OK, I see, but:

  1. I know that SSVAE has two losses, but where can I see the setting aux-loss=True?
  2. In the VAE code, each registered module gets a different name. Why is the name in the SSVAE the same for every call, i.e. "ss_vae"? Can it be changed to a different name?

When you run a Python script you can specify script options via command line arguments, so you can use this loss by invoking your script as:

python ss_vae_M2.py --aux-loss

You can change the name; it only governs the prefix that is prepended to the parameter names in the param store.
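For example (a sketch; the exact name-mangling between the module name and the parameter name can vary across Pyro versions):

pyro.module("ss_vae", ss_vae)  # the name here is entirely up to you
for name in pyro.get_param_store().get_all_param_names():
    print(name)  # e.g. ss_vae$$$encoder_y.weight, ss_vae$$$encoder_y.bias, ...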


Thank you for your answer. :grin:

Just to make sure I understand the pyro.module function as well. Presumably, if we are doing the typical thing in Pyro of defining some model and guide functions, and we wanted to be SUPER minimalist for some reason, could we then do something like the following:

def guide(**kwargs):
    pyro.module("our_model", self)  # register our model here
    # do all our fancy business

def model(**kwargs):
    # no need to register our model here, since we already did it in our guide
    # do our other fancy business

svi = SVI(model, guide, optimizer, loss=Trace_ELBO())  # or whatever the proper constructor is...

# somewhere we do some sort of training
for batch in train_batches:
    # our torch.nn.Module is properly updated since we at least registered it in the guide
    svi.step(batch)

Assuming your networks are submodules of self, then yes, that is fine. This is how the SS-VAE tutorial does it.
