Error using irange

I’m playing with this simple model:

import numpy as np
import torch
import pyro
from pyro.distributions import Normal
from pyro.infer.mcmc import MCMC, NUTS

def genData():
    priceNow=np.random.randn(10)*1.0+15.0
    return torch.tensor(priceNow)

def model(data):
    mu=pyro.sample("mu",Normal(loc=torch.tensor(0.0),scale=torch.tensor(1.0)))
    for i in pyro.irange("my_range",10):
        pyro.sample("data_{}".format(i),Normal(loc=mu,scale=torch.tensor(1.0)),obs=data[i])

data=genData()
kernel=NUTS(model,adapt_step_size=True)
sampler=MCMC(kernel,num_samples=100,warmup_steps=500)
marginal=pyro.infer.EmpiricalMarginal(sampler.run(data),sites=["mu"])

but I get an error (I can post the whole trace if needed):

AttributeError: '_Subsample' object has no attribute 'support'

If I use range in place of irange, everything works.
I’m sure I’m doing something wrong, but after trying all the variants I could think of I’m stuck.
Thank you

I think that this is currently fixed in the dev branch of Pyro. Until recently, HMC was not making use of conditional independence information from iarange or irange (we are using the former now to enumerate out discrete latent variables).

For your use case though, you don’t need irange or iarange, and you can rely on tensor broadcasting, which will be much faster. The following should work fine:

def genData():
    priceNow = torch.randn(10) * 1.0 + 15.0
    return priceNow

def model(data):
    mu = pyro.sample("mu", Normal(loc=torch.tensor(0.0), scale=torch.tensor(1.0)))
    pyro.sample("data", Normal(loc=mu, scale=torch.tensor(1.0)), obs=data)

Thank you for the clarification @neerajprad.
The whole model is slightly more complicated, and that was the reason for using irange:

Data:

def genData():
    x0 = np.random.randn(10)*1.0 + 15.0
    x1 = np.random.randn(10)*1.0 + 20.0
    x2 = np.random.randn(10)*1.0 + 10.0
    x3 = np.random.randn(10)*1.0 + 0.0
    x4 = np.random.randn(10)*1.0 + 20.0
    x5 = np.random.randn(10)*1.0 + 0.0
    return torch.tensor([[x0[i], x1[i], x2[i], x3[i], x4[i], x5[i]] for i in range(10)])

First model attempt using irange:

def model(data):
    a_mu = pyro.sample("a_mu", Normal(loc=0.0, scale=1.0))
    a_sigma = pyro.sample("a_sigma", Normal(loc=0.0, scale=1.0))

    b_mu = pyro.sample("b_mu", Normal(loc=0.0, scale=1.0))
    b_sigma = pyro.sample("b_sigma", Normal(loc=0.0, scale=1.0))

    for i in pyro.irange("range", data.shape[0]):
        x1_x0 = data[i][1] - data[i][0]
        x0 = data[i][0]
        x1 = data[i][1]
        x2 = data[i][2]
        x3 = data[i][3]
        x4 = data[i][4]
        x5 = data[i][5]
        pyro.sample(f"data_{i}", Normal(loc=(x4-x0)*a_mu + (x5-x0)*a_sigma + (x2-x0)*b_mu + (x3-x0)*b_sigma, scale=1.0), obs=x1_x0)

Model expressed as a matrix product (after re-arranging the equation):

def model_V2(data):
    a_mu = pyro.sample("a_mu", Normal(loc=0.0, scale=10.0))
    a_sigma = pyro.sample("a_sigma", Normal(loc=0.0, scale=10.0))

    b_mu = pyro.sample("b_mu", Normal(loc=0.0, scale=10.0))
    b_sigma = pyro.sample("b_sigma", Normal(loc=0.0, scale=10.0))

    y = torch.matmul(data, torch.tensor([-a_mu - a_sigma - b_mu - b_sigma + 1, -1, b_mu, b_sigma, a_mu, a_sigma]))
    pyro.sample("data", Normal(loc=y, scale=torch.ones(10)), obs=torch.zeros(10))

I believe model and model_V2 are equivalent (I may have made a mistake re-arranging the equation, but conceptually they should be equal).
Are my assumptions correct?
This version seems much slower, though.

Going back to your comment (just for my understanding), I guess I can re-write your version using expand_by (which to me seems more readable):

def model(data):
    mu = pyro.sample("mu", Normal(loc=torch.tensor(0.0), scale=torch.tensor(1.0)))
    pyro.sample("data", Normal(loc=mu, scale=torch.tensor(1.0)).expand_by([10]), obs=data)

Again: am I right?
Thank you for your time.

@neerajprad,
Just for fun I installed the dev branch from GitHub and built from source, but I still get the same error when I try to use irange.

Just for fun I installed the dev branch from GitHub and built from source, but I still get the same error when I try to use irange.

I can confirm that this works fine on the dev branch. Can you check the output of pip freeze to make sure that you are not picking up some other version of Pyro instead?
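
A quick way to double-check from within Python is to print the version and install path of the module that actually gets imported:

import pyro
print(pyro.__version__)  # should report the dev version
print(pyro.__file__)     # shows whether the source checkout or the pip package is being picked up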

Going back to your comment (just for my understanding), I guess I can re-write your version using expand_by (which to me seems more readable)

Sure, it will be the same; in this case it really does not matter. You should be able to compute the log pdf of a larger batch of data provided that the values to be scored are broadcastable with the samples from the distribution instance. e.g. torch.distributions.Normal(0., 1.).log_prob(torch.zeros(10)) is a valid operation, and will yield the same result as torch.distributions.Normal(torch.zeros(10), torch.ones(10)).log_prob(torch.zeros(10)).
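
As a quick check of that claim:

import torch

lp1 = torch.distributions.Normal(0., 1.).log_prob(torch.zeros(10))  # scalar parameters broadcast over the 10 values
lp2 = torch.distributions.Normal(torch.zeros(10), torch.ones(10)).log_prob(torch.zeros(10))
print(torch.allclose(lp1, lp2))  # True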

I believe model and model_V2 are equivalent (I may have made a mistake re-arranging the equation, but conceptually they should be equal).

I haven’t checked all the details but they seem equivalent. Is this version slower than the one with irange?
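
If you want to convince yourself about the re-arrangement, here is a quick numerical check (a sketch that only compares the observation means, with the coefficients held at arbitrary fixed values; note that, as posted, your two snippets also use different prior scales, 1.0 vs 10.0):

import torch

data = genData()                                    # the (10, 6) tensor from above
a_mu, a_sigma, b_mu, b_sigma = 0.3, -0.2, 0.1, 0.4  # arbitrary fixed values

# per-row mean and observation used in `model`
x0, x1, x2, x3, x4, x5 = [data[:, j] for j in range(6)]
obs = x1 - x0
loc = (x4 - x0)*a_mu + (x5 - x0)*a_sigma + (x2 - x0)*b_mu + (x3 - x0)*b_sigma

# mean used in `model_V2`
w = torch.tensor([-a_mu - a_sigma - b_mu - b_sigma + 1, -1, b_mu, b_sigma, a_mu, a_sigma], dtype=data.dtype)
y = torch.matmul(data, w)

# Normal(loc, 1).log_prob(obs) depends only on (obs - loc)**2, and y equals loc - obs,
# so the two observation terms give the same log density.
print(torch.allclose(y, loc - obs))  # True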

@neerajprad,
You were right: for some reason I was still using the pip-installed version of Pyro. With the dev version I don’t get the error anymore.
As for the speed, it was just an impression; I’ll make some timing measurements.
Once again, thank you for your very useful explanations and for your time.