# Optimizing NUTS

Are there any additional parameters that can be used to make NUTS sample more efficiently?

I have a problem where the posterior is expected to have a single peak and the model evaluation is slow (I’m estimating the eigenvalues of a large matrix). A relatively simple version of our code can be found here - cell `In[10]` in the notebook contains the `model()`. The NUTS sampler takes a long time for each iteration step, so my question is: can the sampler be tweaked for my specific problem to achieve faster convergence?

there’s probably not much you can do very easily. your best bet would be to try to make the underlying linear algebra faster. if that’s not possible, it’s plausible that a different prior choice might converge more quickly, but that may not be something you’re able to change depending on your application. i have no idea what values `cmin` and `cmax` have, but if the posterior standard deviations for the different components of `c_arr` have wildly different scales, and you have some a priori knowledge of what those scales might be, you could reparameterize `c_arr` into a coordinate system in which the posterior standard deviations are all similar (e.g. all order unity)
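To make the reparameterization suggestion concrete, here is a minimal NumPy sketch of mapping `c_arr` onto a common unit scale and back. The `loc_guess` and `scale_guess` values are placeholders standing in for a priori knowledge of the posterior location and spread of each component; they are not derived from the actual model.

```python
import numpy as np

# hypothetical a-priori guesses for the posterior location and scale of each
# component of c_arr (placeholder numbers, not taken from the actual model)
loc_guess = np.array([0.96, 0.14, 0.94, 0.14, 0.92, 0.15, 0.91, 0.15])
scale_guess = np.array([0.10, 0.015, 0.10, 0.015, 0.09, 0.015, 0.09, 0.015])

def to_unit_scale(c_arr):
    """Whiten c_arr so each component's posterior std is of order unity."""
    return (c_arr - loc_guess) / scale_guess

def from_unit_scale(z):
    """Inverse map, applied inside the model before the eigenvalue computation."""
    return loc_guess + scale_guess * z

# the round trip recovers the original coordinates exactly
c = np.array([1.0, 0.15, 0.9, 0.13, 0.95, 0.16, 0.88, 0.14])
assert np.allclose(from_unit_scale(to_unit_scale(c)), c)
```

The sampler would then operate on `z` (where all components have a similar scale), and the model would call `from_unit_scale(z)` before doing the expensive linear algebra.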

Thanks for the speedy response, @martinjankowiak. Here are `cmin` and `cmax`; their values are usually between 0.1 and 1.5:

```python
cmin = DeviceArray([0.67379816, 0.09862067, 0.66162481, 0.09771745, 0.64221353,
                    0.10262811, 0.63583597, 0.10403272], dtype=float64)

cmax = DeviceArray([1.25133944, 0.18315267, 1.22873179, 0.18147526, 1.19268227,
                    0.19059507, 1.18083823, 0.19320363], dtype=float64)
```

We expect the posterior to peak somewhere near `0.5*(cmin+cmax)`.
What are the recommended limits for the `Uniform` prior? Would something like this be preferable:

`dist.Uniform(jnp.zeros_like(cmin), jnp.ones_like(cmax))`, with the samples then scaled appropriately during the computation of the model?
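A minimal sketch of what "scaling during the computation of the model" could look like, using the `cmin`/`cmax` values above: sample on the unit box and map affinely into the physical bounds. The function name `scale_to_bounds` is just illustrative.

```python
import numpy as np

cmin = np.array([0.67379816, 0.09862067, 0.66162481, 0.09771745,
                 0.64221353, 0.10262811, 0.63583597, 0.10403272])
cmax = np.array([1.25133944, 0.18315267, 1.22873179, 0.18147526,
                 1.19268227, 0.19059507, 1.18083823, 0.19320363])

def scale_to_bounds(u):
    """Map a sample u from the unit box [0, 1]^8 into the box [cmin, cmax]."""
    return cmin + u * (cmax - cmin)

# the midpoint of the unit box lands exactly at the expected posterior peak
u = np.full(8, 0.5)
c_arr = scale_to_bounds(u)
assert np.allclose(c_arr, 0.5 * (cmin + cmax))
```

This keeps the sampled variable on a common scale while the model still sees `c_arr` in its physical range.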

i don’t know, you’ll have to try and see. changing the prior probably won’t change much, since it looks like your posterior is pretty strongly peaked. reparameterizing `c_arr` in different coordinates might help a bit, but probably not dramatically.

it’s possible that if `c_arr` is low-dimensional enough you might also get improved performance if you set `dense_mass=True`. you might also try and see if you can get away with lowering `max_tree_depth`