Hi everyone,

I’m exploring Bayesian linear regression for the first time, and I’m confused about how to define a model in which some learnable parameters control the functional form of a transformation that is applied to other learnable parameters.

For example, I have a diminishing-returns transformation applied to some parameters:

```
import numpy as np

def diminishing_returns(spend, half_sat, shape):
    # S-shaped saturation: output climbs toward 1 as spend grows (for shape > 0)
    return 1 / (1 + np.power(np.exp(spend / half_sat), -shape))
```
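For reference, I think the torch version of this would look something like the sketch below, since `half_sat` and `shape` come out of `pyro.sample` as tensors (the function name here is just my own placeholder, and I haven't verified that this is the right direction):

```
import torch

# Sketch only: the same transform written with torch ops, assuming spend,
# half_sat and shape are all torch tensors (hypothetical helper, not verified)
def diminishing_returns_torch(spend, half_sat, shape):
    # Saturating curve that rises toward 1 as spend grows (for shape > 0)
    return 1 / (1 + torch.pow(torch.exp(spend / half_sat), -shape))
```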

I want the shape and half-saturation point of that transformation to also be learnable parameters, so that I could use it like this:

```
import torch
import pyro

def model():
    sigma = pyro.sample('noise', pyro.distributions.HalfNormal(scale=10))
    baseline = pyro.sample('baseline', pyro.distributions.Normal(loc=0, scale=1))
    half_sat = pyro.sample('half_sat', pyro.distributions.Normal(loc=0, scale=1))
    shape = pyro.sample('shape', pyro.distributions.Normal(loc=0, scale=1))
    # Channel effect: a learnable coefficient scaled by the saturating transform,
    # whose half_sat and shape are themselves sampled above
    channel = pyro.sample('channel', pyro.distributions.HalfNormal(scale=1)) * diminishing_returns(train_features["channel"], half_sat, shape)
    mean = baseline + channel
    with pyro.plate('data', len(train_features.index)):
        pyro.sample('obs', pyro.distributions.Normal(loc=mean, scale=sigma), obs=torch.from_numpy(train_labels.to_numpy()))
```
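For context, here is roughly how I was planning to fit it, assuming the model above and my `train_features` / `train_labels` pandas objects (I haven't settled on MCMC vs. SVI yet, so this is just a sketch):

```
from pyro.infer import MCMC, NUTS

# Rough sketch of the inference step, assuming model() takes no arguments
nuts_kernel = NUTS(model)
mcmc = MCMC(nuts_kernel, num_samples=1000, warmup_steps=500)
mcmc.run()
posterior = mcmc.get_samples()  # dict of posterior draws keyed by site name
```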

How can I adjust my diminishing returns function to make this work? Are there any references that address a similar problem?