Implementing a Probabilistic Model in Pure PyTorch (for learning)

Hello, I am new to Pyro and probabilistic programming. I just finished reading the VAE example in the docs and feel confident writing probabilistic models in Pyro. However, for the sake of learning, I decided to implement one using plain PyTorch distributions. I would like to know whether this model is right or wrong.

import torch as T
import torch.nn as nn
import torch.distributions as dist

class NN(nn.Module):
    def __init__(self):
        super().__init__()
        self.lin = nn.Linear(784, 128)      # 28 * 28 = 784 input features
        self.fc_loc = nn.Linear(128, 64)    # separate heads for loc and scale
        self.fc_scale = nn.Linear(128, 64)

    def forward(self, x):
        x = self.lin(x)
        loc = self.fc_loc(x)
        # softplus keeps the scale strictly positive
        scale = nn.functional.softplus(self.fc_scale(x))
        return loc, scale

def model():
    # implementing p(z|x)
    x = T.rand([1, 1, 28, 28]).reshape(1, -1)  # flatten to (1, 784) for the linear layer
    net = NN()
    loc, scale = net(x)
    return dist.Normal(loc, scale).sample()
    

I don't think this model is quite right. You also need to write a guide, which gives every sampled parameter an approximating distribution for its posterior.

In my understanding, under the Pyro framework you need two things: the model function, which expresses the forward probabilistic model, i.e., the prior over the parameters and the likelihood of the data given those parameters; and the guide function, which gives each stochastic parameter (anything with a prior) an approximate distribution for its posterior.
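
To make that split concrete, here is a minimal sketch. It assumes a single scalar latent z with a Normal prior and a Normal likelihood; the site names ("z", "obs") and the variational parameters q_loc / q_scale are just illustrative choices, not something from your post:

import torch
import pyro
import pyro.distributions as dist
from torch.distributions import constraints

def model(data):
    # prior over a latent parameter z
    z = pyro.sample("z", dist.Normal(0., 1.))
    # likelihood of the observed data given z
    with pyro.plate("data", len(data)):
        pyro.sample("obs", dist.Normal(z, 1.), obs=data)

def guide(data):
    # variational parameters of the approximate posterior q(z)
    loc = pyro.param("q_loc", torch.tensor(0.))
    scale = pyro.param("q_scale", torch.tensor(1.), constraint=constraints.positive)
    # every unobserved sample site in the model needs a matching site in the guide
    pyro.sample("z", dist.Normal(loc, scale))

The important point is that the model contains the prior and the likelihood (with obs=data), while the guide only contains sample statements for the latent sites, parameterized by learnable pyro.param values.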

Sorry if I am wrong, but does that mean I have to sample my input data from another distribution rather than using random data?

Take a look at the PyTorch example.

Your data is the observation. The aim here should be to estimate the stochastic parameters given that observation. If the dataset is too large, you can still use the stochastic (mini-batch) version of variational inference.
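
As a rough sketch of what that could look like (again assuming a scalar latent z with a Normal prior and Normal likelihood, and an arbitrary subsample_size; the synthetic data here just stands in for your real observations), SVI with mini-batch subsampling can be set up like this:

import torch
import pyro
import pyro.distributions as dist
from torch.distributions import constraints
from pyro.infer import SVI, Trace_ELBO
from pyro.optim import Adam

def model(data):
    z = pyro.sample("z", dist.Normal(0., 1.))
    # subsample_size makes each ELBO step use a random mini-batch;
    # Pyro rescales the likelihood term to account for the subsampling
    with pyro.plate("data", len(data), subsample_size=256) as idx:
        pyro.sample("obs", dist.Normal(z, 1.), obs=data[idx])

def guide(data):
    loc = pyro.param("q_loc", torch.tensor(0.))
    scale = pyro.param("q_scale", torch.tensor(1.), constraint=constraints.positive)
    pyro.sample("z", dist.Normal(loc, scale))

data = torch.randn(10_000) + 3.0   # stand-in for the real, observed dataset
svi = SVI(model, guide, Adam({"lr": 1e-2}), loss=Trace_ELBO())
for step in range(2000):
    svi.step(data)

Each call to svi.step draws a fresh mini-batch through the plate, so the memory cost per step stays fixed even when the full dataset is large.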