Help with MixedMultiOptimizer error on basic example

Hi all,

I’m modelling a problem that is non-identifiable, and to fit it I need to keep some parts fixed while adjusting others. I understand that MixedMultiOptimizer is a suitable option for this scenario, so I endeavoured to follow the guide here:

I wasn’t able to get it to work on my problem, so I reduced the problem to a super basic coin flip example. Here is the data and the model code:

import torch
from torch.distributions import constraints
import pyro
import pyro.distributions as dists

class Model(object):
    def __init__(self, data):
        self.data = data
    def model(self, data):
        p_pos = pyro.sample('p_pos', dists.Beta(1., 1.))
        with pyro.plate('data_plate', len(data)):
            pyro.sample('obs', dists.Bernoulli(p_pos), obs=data)
    def guide(self, data):
        q_a = pyro.param('q_a', torch.tensor(1.0), constraint=constraints.positive)
        q_b = pyro.param('q_b', torch.tensor(1.0), constraint=constraints.positive)
        p_pos = pyro.sample('p_pos', dists.Beta(q_a, q_b))

N = 1000  # number of coin flips
data = (torch.rand(N) < 0.55).float()

It’s fairly straightforward, and when I infer the parameters I get something reasonable:

model = Model(data)
optimizer = pyro.optim.Adam({"lr": 1e-2})
svi = pyro.infer.SVI(model.model, model.guide, optimizer, loss=pyro.infer.Trace_ELBO())
for _ in range(10):
    loss = svi.step(data) / len(data)

This gives q_a=12.2312 and q_b=9.7367, which roughly corresponds to the 0.55 that I specified.
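For reference, the mean of a Beta(a, b) distribution is a / (a + b), so the fitted values can be checked directly:

```python
# Posterior mean implied by the fitted guide parameters
q_a, q_b = 12.2312, 9.7367
mean = q_a / (q_a + q_b)
print(round(mean, 3))  # 0.557, close to the true 0.55
```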

Lovely! :slight_smile:

Adapting this code to use MixedMultiOptimizer looked fairly straightforward:

from pyro.optim.multi import MixedMultiOptimizer


model = Model(data)

adam = pyro.optim.Adam({'lr': 0.1})
sgd = pyro.optim.SGD({'lr': 0.01})
optim = MixedMultiOptimizer([
    (['q_a'], adam),
    (['q_b'], sgd),
])

elbo = pyro.infer.Trace_ELBO()
with pyro.poutine.trace(param_only=True) as param_capture:
    loss = elbo.differentiable_loss(model.model, model.guide, data)
params = {'q_a': pyro.param('q_a'), 'q_b': pyro.param('q_b')}
optim.step(loss, params)

Unfortunately, when I run the code above, the following error is raised:

RuntimeError: One of the differentiated Tensors appears to not have been used in the graph. Set allow_unused=True if this is the desired behavior.
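For what it’s worth, this error comes straight from PyTorch: torch.autograd.grad raises it whenever one of the requested tensors never participated in computing the loss. A minimal standalone repro (no Pyro involved):

```python
import torch

x = torch.tensor(1.0, requires_grad=True)
y = torch.tensor(1.0, requires_grad=True)  # never used below
loss = 2 * x

try:
    # y did not contribute to loss, so autograd complains
    torch.autograd.grad(loss, [x, y])
    caught = None
except RuntimeError as e:
    caught = e
    print(e)
```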

I tried hacking the appropriate line in pyro/optim/ to add that argument, but then another exception is raised:

ValueError: can't optimize a non-leaf Tensor
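For reference, this second error is also generic PyTorch behaviour: torch.optim optimizers only accept leaf tensors. A minimal repro:

```python
import torch

leaf = torch.tensor(1.0, requires_grad=True)
non_leaf = leaf.exp()  # derived from leaf, so non_leaf.is_leaf is False

try:
    torch.optim.SGD([non_leaf], lr=0.1)
    err = None
except ValueError as e:
    err = e
    print(e)  # same ValueError as above
```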

I’ve spent some time trying to figure this out, but have not been able to. I’m hoping somebody here can help out!

Thanks in advance!

Here are the versions I’m using:

Python 3.8
Torch 1.6.0
Pyro 1.4.0

I think the issue is the constraints on your parameters: pyro.param returns the constrained value, which is recomputed from an unconstrained leaf tensor and so is neither a leaf nor the exact node used in your loss graph. Please try stepping on the unconstrained tensors instead:

params = {'q_a': pyro.param('q_a').unconstrained(),
          'q_b': pyro.param('q_b').unconstrained()}