It might help to avoid in-place operations (I now always avoid in-place PyTorch ops), so write the subtraction out-of-place:
a = a - a.mean()
If this doesn’t work, could you paste more code so we can see how a is being used? Another option is to sample from the (n-1)-dimensional space and manually map the sample into the n-dimensional space, similar to the transforms used by torch.distributions.constraints.
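As a sketch of that second idea: PyTorch ships a StickBreakingTransform that bijectively maps an unconstrained vector in R^{n-1} onto the interior of the simplex in R^n, so you can optimize or sample freely in the lower-dimensional space. The choice of n here is just for illustration:

```python
import torch
from torch.distributions.transforms import StickBreakingTransform

n = 4

# Sample (or optimize) freely in R^{n-1}; no constraint needed here.
unconstrained = torch.randn(n - 1, requires_grad=True)

# Map into R^n: the result has n positive components summing to 1,
# and gradients flow back through the unconstrained parameters.
simplex_point = StickBreakingTransform()(unconstrained)
```

Because the transform is differentiable, this plays well with gradient-based training, unlike projecting after the fact.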
Sample from a Dirichlet distribution (with alpha equal to 1 for each component if you want a uniform distribution over the simplex). Then subtract 1/n from each component of the resulting vector — that is its mean, since the components sum to 1 — to get a zero-mean sample.
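A minimal sketch of that recipe, assuming the goal is a uniformly sampled vector whose components sum to zero (n is arbitrary here):

```python
import torch
from torch.distributions import Dirichlet

n = 5

# alpha = 1 for every component gives the uniform distribution on the simplex.
dist = Dirichlet(torch.ones(n))

x = dist.sample()        # components are positive and sum to 1
centered = x - 1.0 / n   # subtract the mean (1/n) so components sum to 0
```

Note that this samples each vector independently; it does not require gradients, so it suits data generation rather than constrained optimization.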