I wrote a blog post about `HiddenLayer`

Hi everyone!

I was curious how to use Pyro’s support for locally reparameterized Bayesian neural networks (bnn.HiddenLayer), so I wrote a blog post about it.
I thought I’d share it here, in case other people find it interesting too. :slight_smile:


Hi, I don’t know if you can help me or not. I was looking at the forum post you mentioned and tried to create a Bayesian classifier following that example. It works, but I cannot understand the dimensions of the mean and scale tensors in the snippet below. Why do layers 2 and 3 use (self.n_hidden + 1, self.n_classes) rather than (self.n_hidden, self.n_hidden), and the final layer (self.n_hidden + 1, …), as the Pyro documentation and the dimensions of the layer-1 parameters would suggest to me?

    # Set-up parameters for the distribution of weights for each layer `a<n>`
    a1_mean = torch.zeros(784, self.n_hidden)
    a1_scale = torch.ones(784, self.n_hidden) 
    a1_dropout = torch.tensor(0.25)
    a2_mean = torch.zeros(self.n_hidden + 1, self.n_classes)
    a2_scale = torch.ones(self.n_hidden + 1, self.n_hidden) 
    a2_dropout = torch.tensor(1.0)
    a3_mean = torch.zeros(self.n_hidden + 1, self.n_classes)
    a3_scale = torch.ones(self.n_hidden + 1, self.n_hidden) 
    a3_dropout = torch.tensor(1.0)
    a4_mean = torch.zeros(self.n_hidden + 1, self.n_classes)
    a4_scale = torch.ones(self.n_hidden + 1, self.n_classes)

Thanks for a great post, and for any help understanding the details of your code.


I believe the extra dimension stands for the bias: if the layer appends a constant 1 to its input, the weight matrix needs one extra input row, which is where the `+1` comes from.
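
To make the bias-folding idea concrete, here is a minimal NumPy sketch (not Pyro’s actual implementation; the shapes and names are illustrative): appending a column of ones to the input lets the last row of the weight matrix act as the bias, so a layer mapping `n_in` inputs to `n_out` outputs needs a weight matrix of shape `(n_in + 1, n_out)`.

```python
import numpy as np

batch, n_in, n_out = 2, 4, 3

x = np.random.randn(batch, n_in)

# Append a constant-1 column: the last weight row now plays the role of the bias.
x_aug = np.hstack([x, np.ones((batch, 1))])   # shape (batch, n_in + 1)

w = np.random.randn(n_in + 1, n_out)          # shape (n_in + 1, n_out) -- the "+1" row

out = x_aug @ w                               # shape (batch, n_out)

# Equivalent to the usual separate weight-plus-bias formulation:
out_ref = x @ w[:-1] + w[-1]
print(out.shape)                              # (2, 3)
print(np.allclose(out, out_ref))              # True
```

This is why every layer’s parameter tensors in the snippet have `n_hidden + 1` (or `784`, for the first layer, if no bias is folded in there) as their first dimension.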
