BNN with option to "freeze" weights

So I have this architecture:

import torch
import torch.nn as nn
import pyro.distributions as dist
from pyro.nn import PyroModule, PyroSample

class BayesianRegression(PyroModule):

    def __init__(self):
        super().__init__()
        # HIDDEN_DIM is defined elsewhere in my script
        self.linear1 = PyroModule[nn.Linear](1, HIDDEN_DIM)
        self.linear1.weight = PyroSample(dist.Normal(0., 1.).expand([HIDDEN_DIM, 1]).to_event(2))
        self.linear1.bias = PyroSample(dist.Normal(0., 10.).expand([HIDDEN_DIM]).to_event(1))
        self.linear2 = PyroModule[nn.Linear](HIDDEN_DIM, 1)
        self.linear2.weight = PyroSample(dist.Normal(0., 1.).expand([1, HIDDEN_DIM]).to_event(2))
        self.linear2.bias = PyroSample(dist.Normal(0., 10.).expand([1]).to_event(1))
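
The forward pass (not shown above) is just the usual two-layer setup, roughly:

    def forward(self, x):
        # the PyroSample weights and biases are drawn when the layers are
        # called, so each forward pass uses a fresh set unless they are frozen
        h = torch.relu(self.linear1(x))
        return self.linear2(h)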

Is there a way to freeze the weights of the linear layers after one sampling (similar to how a dropout layer becomes deterministic at test time) in order to get a constant output?

@chaps I think you can simply replace the weights with constant values:

your_module.linear1.weight = constant / nn.Parameter(constant) / ...

:smiley:
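
For example, something along these lines (just a sketch; the zero tensors are placeholders for whatever constant values you want):

import torch

# assigning plain tensors overrides the PyroSample attributes,
# so subsequent forward passes reuse these fixed values
your_module.linear1.weight = torch.zeros(HIDDEN_DIM, 1)
your_module.linear1.bias = torch.zeros(HIDDEN_DIM)
your_module.linear2.weight = torch.zeros(1, HIDDEN_DIM)
your_module.linear2.bias = torch.zeros(1)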

Thank you for the reply :slight_smile:

Let’s say you freeze the weight like this:

# accessing the attribute draws one sample of the weight from its prior
frozen_weight = model.linear1.weight
# assigning that tensor back replaces the PyroSample, so the value stays fixed
model.linear1.weight = frozen_weight

How would you then return the model to sampling from a distribution for model.linear1.weight?

For my use case I would like to be able to sample a set of weights from a BNN, apply those weights for a number of prediction batches, then resample the BNN weights and repeat this process.
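
One idea I'm considering (not sure whether this is the recommended way) is to leave the PyroSample attributes alone and instead fix the sampled values with poutine; x_example, prediction_batches and NUM_RESAMPLES below are placeholders for my actual data:

import pyro.poutine as poutine

model = BayesianRegression()

for _ in range(NUM_RESAMPLES):
    # run the model once under a trace to draw one set of weights
    trace = poutine.trace(model).get_trace(x_example)
    weights = {name: node["value"].detach()
               for name, node in trace.nodes.items()
               if node["type"] == "sample"}

    # reuse exactly those weights for several prediction batches
    frozen_model = poutine.condition(model, data=weights)
    for x_batch in prediction_batches:
        y_pred = frozen_model(x_batch)

    # the next outer iteration traces the unconditioned model again,
    # which resamples a fresh set of weights

Would something like this work, or is there a better way to switch back and forth?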