MNIST concerns

Basically, I ran the tutorial and it all worked fine.

The model is an MLP with a single hidden layer of size 1000 and standard normal priors on both linear layers.

The guide is more or less the same as in the tutorial.
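
For concreteness, my setup looks roughly like this (a minimal sketch rather than my exact code; the class name MLP and the autoguide choice are illustrative):

import torch
import torch.nn as nn
import pyro
import pyro.distributions as dist
from pyro.nn import PyroModule, PyroSample
from pyro.infer.autoguide import AutoDiagonalNormal

class MLP(PyroModule):
    def __init__(self, in_dim=784, hidden=1000, out_dim=10):
        super().__init__()
        # standard normal priors on the weights and biases of both layers
        self.fc1 = PyroModule[nn.Linear](in_dim, hidden)
        self.fc1.weight = PyroSample(dist.Normal(0., 1.).expand([hidden, in_dim]).to_event(2))
        self.fc1.bias = PyroSample(dist.Normal(0., 1.).expand([hidden]).to_event(1))
        self.fc2 = PyroModule[nn.Linear](hidden, out_dim)
        self.fc2.weight = PyroSample(dist.Normal(0., 1.).expand([out_dim, hidden]).to_event(2))
        self.fc2.bias = PyroSample(dist.Normal(0., 1.).expand([out_dim]).to_event(1))

    def forward(self, x, y=None):
        h = torch.relu(self.fc1(x))
        logits = self.fc2(h)
        with pyro.plate("data", x.shape[0]):
            pyro.sample("obs", dist.Categorical(logits=logits), obs=y)
        return logits

model = MLP()
guide = AutoDiagonalNormal(model)  # mean-field normal guide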

The final accuracy on MNIST turns out to be 89%, which is significantly lower than the 98% you can get with a MAP estimate.
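
For what it’s worth, “accuracy” here means something like the following (an illustrative sketch, assuming the model/guide above and a test_loader of MNIST batches):

from pyro.infer import Predictive

predictive = Predictive(model, guide=guide, num_samples=16, return_sites=["obs"])
correct, total = 0, 0
for images, labels in test_loader:
    samples = predictive(images.view(-1, 784))["obs"]  # (num_samples, batch)
    preds = samples.mode(dim=0).values                 # majority vote across posterior samples
    correct += (preds == labels).sum().item()
    total += labels.numel()
print(f"accuracy: {correct / total:.3f}")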

I’m wondering whether the tutorial is doing something wrong, or I am.

Cheers
Laksh

please search this forum for “bayesian neural networks” and read those posts, for example this post or this post.

that medium post actually contains some regrettable technical errors.

Hi @martinjankowiak

Thanks for your response.

I will check out the two links you sent.

May I ask, what are the technical errors in the post? It may be worth following up with the author, as it is quite a popular post and probably one of the first posts people arrive at when learning about Pyro (like me).

Cheers,
Laksh

from what i recall, subsampling (and thus the scaling of KL divergences) wasn’t being handled properly.
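
concretely: if you train on minibatches but never tell pyro the full dataset size, the log-likelihood of a batch of size B is weighted B/N too small against the KL term, the prior dominates, and you underfit. a minimal sketch of the correct scaling (observe and N_TOTAL are illustrative names, not from the post):

import pyro
import pyro.distributions as dist
from pyro import poutine

N_TOTAL = 60000  # full MNIST training set size

def observe(logits, y):
    # scale the minibatch log-likelihood up to the full dataset so it is
    # weighted correctly against the KL term in the ELBO
    with poutine.scale(scale=N_TOTAL / logits.shape[0]):
        with pyro.plate("data", logits.shape[0]):
            pyro.sample("obs", dist.Categorical(logits=logits), obs=y)

# equivalently, pyro.plate("data", size=N_TOTAL, subsample=idx) applies the
# same N_TOTAL / len(idx) scaling automatically, given minibatch indices idx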

from what i recall, this post seemed to be doing things correctly.

@martinjankowiak Thank you for the rapid response. I will try out that one and report back if I have any luck.

Cheers,
Laksh

Hi @martinjankowiak,

I tried to run the code in the post that you referred to, but I think it was written for Pyro 0.3, and I currently have 1.3.1. When I run the code without any changes, the SVI step function (called from the infer_parameters function) fails with:

AttributeError: 'HiddenLayer' object has no attribute '_batch_shape'

I have been struggling to find out what changed between 0.3 and 1.3.1 to cause this issue. I would appreciate any help.

The code that I am using can be seen in this github notebook.

Here is the complete stacktrace:

AttributeError                            Traceback (most recent call last)
<ipython-input> in <module>
      1 pyro.clear_param_store()
      2 bayesnn = BNN()
----> 3 bayesnn.infer_parameters(train_loader, num_epochs=30, lr=0.002)

<ipython-input> in infer_parameters(self, loader, lr, momentum, num_epochs)
    100         correct = 0.0
    101         for images, labels in loader:
--> 102             loss = svi.step(images.cuda(), labels.cuda(), kl_factor=kl_factor)
    103             pred = self.forward(images.cuda(), n_samples=1).mean(0)
    104             total_loss += loss / len(loader.dataset)

~/venv/venv-rl/lib/python3.6/site-packages/pyro/infer/svi.py in step(self, *args, **kwargs)
    126         # get loss and compute gradients
    127         with poutine.trace(param_only=True) as param_capture:
--> 128             loss = self.loss_and_grads(self.model, self.guide, *args, **kwargs)
    129
    130         params = set(site["value"].unconstrained()

~/venv/venv-rl/lib/python3.6/site-packages/pyro/infer/trace_elbo.py in loss_and_grads(self, model, guide, *args, **kwargs)
    124         loss = 0.0
    125         # grab a trace from the generator
--> 126         for model_trace, guide_trace in self._get_traces(model, guide, args, kwargs):
    127             loss_particle, surrogate_loss_particle = self._differentiable_loss_particle(model_trace, guide_trace)
    128             loss += loss_particle / self.num_particles

~/venv/venv-rl/lib/python3.6/site-packages/pyro/infer/elbo.py in _get_traces(self, model, guide, args, kwargs)
    168         else:
    169             for i in range(self.num_particles):
--> 170                 yield self._get_trace(model, guide, args, kwargs)

~/venv/venv-rl/lib/python3.6/site-packages/pyro/infer/trace_mean_field_elbo.py in _get_trace(self, model, guide, args, kwargs)
     66     def _get_trace(self, model, guide, args, kwargs):
     67         model_trace, guide_trace = super()._get_trace(
---> 68             model, guide, args, kwargs)
     69         if is_validation_enabled():
     70             _check_mean_field_requirement(model_trace, guide_trace)

~/venv/venv-rl/lib/python3.6/site-packages/pyro/infer/trace_elbo.py in _get_trace(self, model, guide, args, kwargs)
     51         """
     52         model_trace, guide_trace = get_importance_trace(
---> 53             "flat", self.max_plate_nesting, model, guide, args, kwargs)
     54         if is_validation_enabled():
     55             check_if_enumerated(guide_trace)

~/venv/venv-rl/lib/python3.6/site-packages/pyro/infer/enum.py in get_importance_trace(graph_type, max_plate_nesting, model, guide, args, kwargs, detach)
     48                                  graph_type=graph_type).get_trace(*args, **kwargs)
     49     if is_validation_enabled():
---> 50         check_model_guide_match(model_trace, guide_trace, max_plate_nesting)
     51
     52     guide_trace = prune_subsample_sites(guide_trace)

~/venv/venv-rl/lib/python3.6/site-packages/pyro/util.py in check_model_guide_match(model_trace, guide_trace, max_plate_nesting)
    221
    222     if hasattr(model_site["fn"], "shape") and hasattr(guide_site["fn"], "shape"):
--> 223         model_shape = model_site["fn"].shape(*model_site["args"], **model_site["kwargs"])
    224         guide_shape = guide_site["fn"].shape(*guide_site["args"], **guide_site["kwargs"])
    225         if model_shape == guide_shape:

~/venv/venv-rl/lib/python3.6/site-packages/pyro/distributions/torch_distribution.py in shape(self, sample_shape)
     67         :rtype: torch.Size
     68         """
---> 69         return sample_shape + self.batch_shape + self.event_shape
     70
     71     def expand(self, batch_shape, _instance=None):

~/venv/venv-rl/lib/python3.6/site-packages/torch/distributions/distribution.py in batch_shape(self)
     63         Returns the shape over which parameters are batched.
     64         """
---> 65         return self._batch_shape
     66
     67     @property

AttributeError: 'HiddenLayer' object has no attribute '_batch_shape'
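
Looking at the last frame, batch_shape just returns self._batch_shape, which is normally set by torch.distributions.Distribution.__init__. So my guess is that the post's custom HiddenLayer distribution never calls super().__init__, which newer Pyro versions effectively require once validation calls .shape() on each site. A toy sketch of the kind of fix I mean (MyHiddenLayer is illustrative, not the post's actual class):

import math
import torch
from torch.distributions import constraints
from pyro.distributions.torch_distribution import TorchDistribution

class MyHiddenLayer(TorchDistribution):
    arg_constraints = {}
    support = constraints.real
    has_rsample = True

    def __init__(self, loc, scale):
        self.loc, self.scale = loc, scale
        # the crucial part: populate _batch_shape/_event_shape via the base
        # class so .shape(), .batch_shape, etc. work under validation
        super().__init__(batch_shape=loc.shape[:-1], event_shape=loc.shape[-1:])

    def rsample(self, sample_shape=torch.Size()):
        eps = torch.randn(sample_shape + self.loc.shape)
        return self.loc + self.scale * eps

    def log_prob(self, value):
        log_norm = -0.5 * math.log(2 * math.pi) - torch.log(self.scale)
        return (log_norm - (value - self.loc) ** 2 / (2 * self.scale ** 2)).sum(-1)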


I ran into the exact same issue. Any update on what to do?

we currently recommend using tyxe for bayesian neural networks in pyro.

we do not recommend trying to replicate random medium posts.
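
for orientation, training a classifier with tyxe looks roughly like the snippet below. this is a sketch from memory of the tyxe readme (github.com/TyXe-BDL/TyXe); treat the exact class names and signatures as assumptions and check the repo. note that the likelihood takes the full dataset size, so the minibatch/KL scaling discussed above is handled for you:

import torch.nn as nn
import pyro
import pyro.distributions as dist
import tyxe

net = nn.Sequential(nn.Linear(784, 1000), nn.ReLU(), nn.Linear(1000, 10))
prior = tyxe.priors.IIDPrior(dist.Normal(0., 1.))
likelihood = tyxe.likelihoods.Categorical(dataset_size=60000)
guide = tyxe.guides.AutoNormal
bnn = tyxe.VariationalBNN(net, prior, likelihood, guide)

optim = pyro.optim.Adam({"lr": 1e-3})
bnn.fit(train_loader, optim, num_epochs=30)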
