Kernel crashes with categorical data of 1000 or more samples

I’m trying to run a simple example of a Dirichlet-categorical model, using a small synthetic dataset of 1000 samples.

Here’s my code; it runs fine whenever I reduce the data to fewer than 1000 samples:

import numpy as np
import torch
import pyro
import pyro.distributions as dist
from pyro.optim import Adam
from pyro.infer import SVI, Trace_ELBO

# three categories with fixed "true" probabilities
categories = [0, 1, 2]
probabilities = torch.tensor([0.8329, 0.1138, 0.0534]).numpy()

# draw 1000 samples
n = 1000
data = np.random.choice(categories, n, p=probabilities)
data = torch.from_numpy(data)

pyro.clear_param_store()

def model(data):
    # Dirichlet prior over the category probabilities
    alphas = torch.tensor([16., 1., 1.])
    probs = pyro.sample('probs', dist.Dirichlet(alphas))

    # categorical likelihood over the observed data
    with pyro.iarange("data_loop", len(data)):
        return pyro.sample("obs", dist.Categorical(probs), obs=data)

def guide(data):
    # variational Dirichlet with learnable concentration parameters
    alphas_q = pyro.param('alpha', torch.tensor([8., 1., 1.]))
    pyro.sample("probs", dist.Dirichlet(alphas_q))

# set up the optimizer and SVI with the standard ELBO loss
adam_params = {"lr": 0.0005, "betas": (0.90, 0.999)}
optimizer = Adam(adam_params)
svi = SVI(model, guide, optimizer, loss=Trace_ELBO())

n_steps = 20000
for step in range(n_steps):
    svi.step(data)

# After training, inspect the learned parameters
print(pyro.param("alpha"))
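
For reference, this is the sanity check I was planning to run once training finishes; normalizing the learned concentrations gives the posterior mean of the Dirichlet, which I’d compare against the true probabilities. I haven’t gotten that far because of the crash:

alpha_q = pyro.param("alpha")
posterior_mean = alpha_q / alpha_q.sum()  # Dirichlet mean = alpha / sum(alpha)
print(posterior_mean)  # hoping this ends up close to [0.8329, 0.1138, 0.0534]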

I can’t see a reasonable-looking error anywhere: there’s no traceback, just a message saying the kernel has crashed. I’m running this in VSCode.
I didn’t think 1000 samples would be too much to process in one go, but should I be batching the data even with a tensor this small?
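
In case the answer is yes, here’s roughly what I assumed minibatching would look like, based on my reading of the SVI tutorial: pass subsample_size to iarange and index the data with the indices it returns. The batch size of 100 is just a guess, and I haven’t verified that this is the right approach:

def model(data):
    alphas = torch.tensor([16., 1., 1.])
    probs = pyro.sample('probs', dist.Dirichlet(alphas))

    # subsample 100 observations per SVI step (100 is an arbitrary choice)
    with pyro.iarange("data_loop", len(data), subsample_size=100) as ind:
        return pyro.sample("obs", dist.Categorical(probs), obs=data[ind])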

Any help would be much appreciated! Thanks!