Hi, we have experimental observations for which it is physically impossible for the "lengthscale" to be below a certain value. However, when I try to add this limit to the kernel's prior,

```python
import torch
import pyro.distributions as dist

kernel.set_prior(
    "lengthscale",
    dist.Uniform(
        torch.tensor(lscale[0]),
        torch.tensor(lscale[1]),
    ).to_event()
)
```

where lscale = [5., 20.] in 1d or lscale = [[5., 5., 5.], [20., 20., 20.]] in 3d, and then run SparseGPRegression with this kernel, the lengthscale parameter is not optimized during the SVI steps: it just stays at the lscale[0] value (while the amplitude and noise do get optimized). If I change the lower limit to any value below 1., say 0.99, then the lengthscale parameter starts being optimized as well. I tried this with the RBF, RationalQuadratic, and Matern kernels, and the behavior is always the same. How can I put a lower limit on the lengthscale parameter (which is based on my knowledge of the physical system that I study)? Thanks!
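One detail that may be relevant (an illustration I am adding for context, not a claim about pyro's internals): pyro's GP kernels default the lengthscale to 1.0 when none is given, and a Uniform(5., 20.) prior assigns -inf log-density to any value below 5. The sketch below shows that boundary behavior with plain torch.distributions (passing validate_args=False so log_prob returns -inf rather than raising), which lines up with the observation that only lower bounds below 1. optimize:

```python
import torch
import torch.distributions as dist

# Bounds from the question: the lengthscale physically cannot be below 5.
low, high = torch.tensor(5.0), torch.tensor(20.0)
prior = dist.Uniform(low, high, validate_args=False)

# Inside the support the log-density is finite and flat ...
inside = prior.log_prob(torch.tensor(10.0))   # log(1/15) ≈ -2.708

# ... but at 1.0 (a typical default initial lengthscale, and below the
# lower bound) it is -inf, so an optimizer starting there gets no
# usable gradient signal from the prior term.
outside = prior.log_prob(torch.tensor(1.0))   # -inf

print(inside.item(), outside.item())
```

This is only a sketch of the prior's support, not of SparseGPRegression itself; whether the stuck-at-lscale[0] behavior is really an initialization-outside-the-support issue is exactly what I would like to confirm.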