Hello all,

I am still building my understanding of Pyro and probabilistic programming more generally, so please excuse the dumb question.

I notice that the memory required grows linearly with the number of data points when I use a parallel plate, as in model1 below. This makes sense. But the same thing seems to happen when I split the data into slices and use a separate plate for each slice; see model2.

Is there any way to avoid the memory requirement growing linearly with the number of observations in MCMC? I suspect the answer is no, but I would appreciate it if you could confirm and explain why it is not possible, or point me to relevant reading.

I understand I could use subsampling in SVI but, unfortunately, SVI is not suitable for my problem.

Many thanks for any help!

```python
import jax.numpy as jnp
import numpyro
import numpyro.distributions as dist

def model1(W, Z):
    N = W.shape[0]
    beta = numpyro.sample("beta", dist.Dirichlet(jnp.ones([Z]) * 0.1))
    with numpyro.plate("W", N):
        numpyro.sample("obs", dist.CategoricalProbs(beta), obs=W)
```

```python
def model2(W, Z, sliceLength):
    N = W.shape[0]
    beta = numpyro.sample("beta", dist.Dirichlet(jnp.ones([Z]) * 0.1))
    S = int(N / sliceLength)
    for s in range(S):
        Wslice = W[(s * sliceLength):((s + 1) * sliceLength)]
        with numpyro.plate("W_{}".format(s), sliceLength):
            numpyro.sample("obs_{}".format(s), dist.CategoricalProbs(beta), obs=Wslice)
```