TypeError: guide() got multiple values for argument 'index'

Hello @eb8680_2,
I am running the code for boosting; the following is my guide function.

def guide(X_data, index):
    print(X_data.shape)
    features = X_data.shape[1]
    X_data = X_data.view(-1, features)
    n_X_data = X_data.size(0)
    a1_mean = pyro.param('a1_mean_{}'.format(index), 0.01 * torch.randn(n_X_data, 128))
    a1_scale = pyro.param('a1_scale_{}'.format(index), 0.1 * torch.ones(n_X_data, 128),
                          constraint=constraints.greater_than(0.01))
    a1_dropout = pyro.param('a1_dropout_{}'.format(index), torch.tensor(0.25),
                            constraint=constraints.interval(0.1, 1.0))
    a2_mean = pyro.param('a2_mean_{}'.format(index), 0.01 * torch.randn(128 + 1, 128))
    a2_scale = pyro.param('a2_scale_{}'.format(index), 0.1 * torch.ones(128 + 1, 128),
                          constraint=constraints.greater_than(0.01))
    a2_dropout = pyro.param('a2_dropout_{}'.format(index), torch.tensor(1.0),
                            constraint=constraints.interval(0.1, 1.0))
    a3_mean = pyro.param('a3_mean_{}'.format(index), 0.01 * torch.randn(128 + 1, 128))
    a3_scale = pyro.param('a3_scale_{}'.format(index), 0.1 * torch.ones(128 + 1, 128),
                          constraint=constraints.greater_than(0.01))
    a3_dropout = pyro.param('a3_dropout_{}'.format(index), torch.tensor(1.0),
                            constraint=constraints.interval(0.1, 1.0))
    a4_mean = pyro.param('a4_mean_{}'.format(index), 0.01 * torch.randn(128 + 1, 2))
    a4_scale = pyro.param('a4_scale_{}'.format(index), 0.1 * torch.ones(128 + 1, 2),
                          constraint=constraints.greater_than(0.01))
    # Sample latent values using the variational parameters that are set up above.
    # Notice how there is no conditioning on labels in the guide!
    with pyro.plate('data', size=n_X_data):
        h1 = pyro.sample('h1', hnn(X_data, a1_mean, a1_scale,
                                   non_linearity=nnf.leaky_relu,
                                   KL_factor=kl_factor))
        h2 = pyro.sample('h2', hnn(h1, a2_mean, a2_scale,
                                   non_linearity=nnf.leaky_relu,
                                   KL_factor=kl_factor))
        h3 = pyro.sample('h3', hnn(h2, a3_mean, a3_scale,
                                   non_linearity=nnf.leaky_relu,
                                   KL_factor=kl_factor))
        logits = pyro.sample('logits', hnn(h3, a4_mean, a4_scale,
                                           non_linearity=lambda x: torch.sigmoid(x),
                                           KL_factor=kl_factor,
                                           include_hidden_bias=False))

I am getting the following error:

Traceback (most recent call last):
  File "New_boosting.py", line 280, in <module>
    boosting_bbvi()
  File "New_boosting.py", line 238, in boosting_bbvi
    loss = svi.step(data, labels, approximation=wrapped_approximation)
  File "/home/pranav/.local/lib/python3.6/site-packages/pyro/infer/svi.py", line 99, in step
    loss = self.loss_and_grads(self.model, self.guide, *args, **kwargs)
  File "/home/pranav/.local/lib/python3.6/site-packages/pyro/infer/svi.py", line 58, in _loss_and_grads
    loss_val = loss(*args, **kwargs)
  File "New_boosting.py", line 187, in relbo
    loss_fn = elbo.differentiable_loss(model, traced_guide, *args, **kwargs)
  File "/home/pranav/.local/lib/python3.6/site-packages/pyro/infer/trace_elbo.py", line 108, in differentiable_loss
    for model_trace, guide_trace in self._get_traces(model, guide, *args, **kwargs):
  File "/home/pranav/.local/lib/python3.6/site-packages/pyro/infer/elbo.py", line 168, in _get_traces
    yield self._get_trace(model, guide, *args, **kwargs)
  File "/home/pranav/.local/lib/python3.6/site-packages/pyro/infer/trace_elbo.py", line 52, in _get_trace
    "flat", self.max_plate_nesting, model, guide, *args, **kwargs)
  File "/home/pranav/.local/lib/python3.6/site-packages/pyro/infer/enum.py", line 42, in get_importance_trace
    guide_trace = poutine.trace(guide, graph_type=graph_type).get_trace(*args, **kwargs)
  File "/home/pranav/.local/lib/python3.6/site-packages/pyro/poutine/trace_messenger.py", line 169, in get_trace
    self(*args, **kwargs)
  File "/home/pranav/.local/lib/python3.6/site-packages/pyro/poutine/trace_messenger.py", line 147, in __call__
    ret = self.fn(*args, **kwargs)
  File "/home/pranav/.local/lib/python3.6/site-packages/pyro/poutine/trace_messenger.py", line 147, in __call__
    ret = self.fn(*args, **kwargs)
TypeError: guide() got multiple values for argument 'index'

It looks from the error message like your model and guide functions don’t take the same arguments - specifically, approximation seems to be missing from the guide, since you’re passing it to svi.step.
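For example, here is a minimal standalone reproduction of that TypeError, assuming (as in the BBVI tutorial) that the guide has been wrapped with functools.partial to fix index before being handed to SVI:

    import functools

    def guide(X_data, index):
        pass

    # The boosting loop binds index up front, as in the tutorial:
    wrapped_guide = functools.partial(guide, index=0)

    # svi.step(data, labels, ...) forwards both positional arguments to the
    # guide, so the second one lands in the slot partial() already filled:
    wrapped_guide('data', 'labels')
    # TypeError: guide() got multiple values for argument 'index'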

Hi @eb8680_2, I am following this boosting example; there the guide takes an extra index argument, and I have done the same.

Hi @eb8680_2, can you help me?

It’s hard for us to be helpful without a clear, complete description of the problem and runnable, correctly formatted/indented code. Do you mean that the BBVI example on the Pyro web page raises an error when you try to run it without any modifications, or have you made changes that cause it to fail? If you mean the latter, please provide a small, runnable code snippet that reproduces the error.

Hi @eb8680_2, sorry for such a late reply. What I want to do is build a binary classifier using boosting. I have already implemented binary classification in Pyro (with bnn.HiddenLayer) and it works fine; now I want to use boosting for the same task. The problem I am facing is that the boosting example in that post does not take labels into account, which is essential for classification. I want to know whether it is possible to add labels to the boosting code, and if so, where I can find information about doing that. Thank you.

Sorry, I’m still not exactly sure what you have in mind. Can you provide a generative model that you want to use as a replacement for the model function in the BBVI tutorial, or provide the model you used in your non-boosting version and explain how you want to change it?

Hi @eb8680_2, sure, I will try to be more elaborate. Below are the non-boosting model and guide I am using; I would like to use the same in the boosting code. What I am not sure of is how the labels parameter in the model can be accommodated in the boosting code, since the example code has no labels.

def model(self, X_data, labels=None, kl_factor=1.0):
    features = X_data.shape[1]
    X_data = X_data.view(-1, features)
    n_X_data = X_data.size(0)
    # Set up parameters for the distribution of weights for each layer a<n>
    a1_mean = torch.zeros(self.inputsize, self.n_hidden)
    a1_scale = torch.ones(self.inputsize, self.n_hidden)
    a1_dropout = torch.tensor(0.25)
    a2_mean = torch.zeros(self.n_hidden + 1, self.n_hidden)
    a2_scale = torch.ones(self.n_hidden + 1, self.n_hidden)
    a2_dropout = torch.tensor(1.0)
    a3_mean = torch.zeros(self.n_hidden + 1, self.n_hidden)
    a3_scale = torch.ones(self.n_hidden + 1, self.n_hidden)
    a3_dropout = torch.tensor(1.0)
    a4_mean = torch.zeros(self.n_hidden + 1, self.n_classes)
    a4_scale = torch.ones(self.n_hidden + 1, self.n_classes)
    # Mark batched calculations as conditionally independent given parameters using plate
    with pyro.plate('data', size=n_X_data):
        # Sample first hidden layer
        h1 = pyro.sample('h1', hnn(X_data, a1_mean, a1_dropout * a1_scale,
                                   non_linearity=nnf.leaky_relu,
                                   KL_factor=kl_factor))
        # Sample second hidden layer
        h2 = pyro.sample('h2', hnn(h1, a2_mean, a2_dropout * a2_scale,
                                   non_linearity=nnf.leaky_relu,
                                   KL_factor=kl_factor))
        # Sample third hidden layer
        h3 = pyro.sample('h3', hnn(h2, a3_mean, a3_dropout * a3_scale,
                                   non_linearity=nnf.leaky_relu,
                                   KL_factor=kl_factor))
        # Sample output logits (torch.sigmoid takes no dim argument)
        logits = pyro.sample('logits', hnn(h3, a4_mean, a4_scale,
                                           non_linearity=lambda x: torch.sigmoid(x),
                                           KL_factor=kl_factor,
                                           include_hidden_bias=False))
        # One-hot encode labels
        labels = nnf.one_hot(labels.to(torch.int64)) if labels is not None else None
        return pyro.sample('lable', dist.OneHotCategorical(logits=logits), obs=labels)

def guide(self, X_data, labels=None, kl_factor=1.0):
    #print("guide")
    features = X_data.shape[1]
    X_data = X_data.view(-1, features)
    n_X_data = X_data.size(0)
    print(X_data.shape,"shape of you")
    print(X_data.shape[1],"shape")
    print(n_X_data,"N_x_data")
    a1_mean = pyro.param('a1_mean', 0.01 * torch.randn(self.inputsize, self.n_hidden))
    a1_scale = pyro.param('a1_scale', 0.1 * torch.ones(self.inputsize, self.n_hidden),
                          constraint=constraints.greater_than(0.01))
    a1_dropout = pyro.param('a1_dropout', torch.tensor(0.25),
                            constraint=constraints.interval(0.1, 1.0))
    a2_mean = pyro.param('a2_mean', 0.01 * torch.randn(self.n_hidden + 1, self.n_hidden))
    a2_scale = pyro.param('a2_scale', 0.1 * torch.ones(self.n_hidden + 1, self.n_hidden),
                          constraint=constraints.greater_than(0.01))
    a2_dropout = pyro.param('a2_dropout', torch.tensor(1.0),
                            constraint=constraints.interval(0.1, 1.0))
    a3_mean = pyro.param('a3_mean', 0.01 * torch.randn(self.n_hidden + 1, self.n_hidden))
    a3_scale = pyro.param('a3_scale', 0.1 * torch.ones(self.n_hidden + 1, self.n_hidden),
                          constraint=constraints.greater_than(0.01))
    a3_dropout = pyro.param('a3_dropout', torch.tensor(1.0),
                            constraint=constraints.interval(0.1, 1.0))
    a4_mean = pyro.param('a4_mean', 0.01 * torch.randn(self.n_hidden + 1, self.n_classes))
    a4_scale = pyro.param('a4_scale', 0.1 * torch.ones(self.n_hidden + 1, self.n_classes),
                          constraint=constraints.greater_than(0.01))
    with pyro.plate('data', size=n_X_data):
        h1 = pyro.sample('h1', hnn(X_data, a1_mean, a1_dropout * a1_scale,
                                               non_linearity=nnf.leaky_relu,
                                               KL_factor=kl_factor))
        h2 = pyro.sample('h2', hnn(h1, a2_mean, a2_dropout * a2_scale,
                                               non_linearity=nnf.leaky_relu,
                                               KL_factor=kl_factor))
        h3 = pyro.sample('h3', hnn(h2, a3_mean, a3_dropout * a3_scale,
                                               non_linearity=nnf.leaky_relu,
                                               KL_factor=kl_factor))
        logits = pyro.sample('logits', hnn(h3, a4_mean, a4_scale,
                                                       non_linearity=lambda x: torch.sigmoid(x),
                                                       KL_factor=kl_factor,
                                                       include_hidden_bias=False))

Hi @eb8680_2, sorry to bother you, but did you get a chance to take a look?

Your model looks OK to me. As described in this section of the tutorial, you'll need to update your guide to take an index argument and apply it to each guide parameter, and there are some problem-specific details you'll need to change to reflect your model, like the list of random variables in expose=['z'] in relbo and the arguments to model, guide, and approximation. Otherwise, I don't see why the "lable" site in your model should affect the applicability of the BBVI code.
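As a rough, untested sketch (reusing relbo, optimizer, wrapped_approximation, and the component index t from the tutorial code), the signature change would look something like this:

    from functools import partial
    import torch
    import pyro
    from pyro.infer import SVI

    def guide(X_data, labels=None, index=0, kl_factor=1.0):
        # Same arguments as the model, plus the boosting component index;
        # suffix every parameter name with the index so each component
        # gets its own set of variational parameters.
        a1_mean = pyro.param('a1_mean_{}'.format(index),
                             0.01 * torch.randn(X_data.shape[1], 128))
        # ... remaining params and sample sites, exactly as in your guide above ...

    # Bind the index before handing the guide to SVI, as the tutorial does;
    # labels then flows through svi.step positionally without colliding:
    wrapped_guide = partial(guide, index=t)
    svi = SVI(model, wrapped_guide, optimizer, loss=relbo)
    loss = svi.step(data, labels, approximation=wrapped_approximation)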

If you’re still having trouble after making those changes, please try to provide a clear, complete description of the problem and runnable, correctly formatted/indented code.

Hi, I was able to solve the error. Thank you for your help!
