Guidance on Parameter Optimization for Physical Engineering Model

Hello,

I have an engineering model simulator that takes 10 input parameters.
I can call it from Python and run thousands of samples.

I have experimental results for 3 specific locations on the model. Each location has 3 experimental data points, and each point has a measurement precision of ±5 degC. I assume the true temperatures would follow a Normal distribution given enough data, so with only 3 points per location I think using a Student's t distribution would be sensible.
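For what it's worth, a Student's t likelihood in Pyro would look something like this (just a sketch: predicted_temp is a made-up stand-in for the simulator output at one location, and I picked df = n - 1 = 2 since each location only has 3 points):

import torch
import pyro.distributions as dist

# Hypothetical simulator output at one location (placeholder value)
predicted_temp = torch.tensor(85.0)

# Student's t with nu = n - 1 = 2 degrees of freedom (3 points per
# location), scale set from the +/-5 degC measurement precision
likelihood = dist.StudentT(2.0, loc=predicted_temp, scale=5.0)
print(likelihood.log_prob(torch.tensor(88.0)))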

I would like to infer what the input parameters probably are.

I was thinking something like this:

import torch
import pyro
import pyro.distributions as dist

def model(X_, init_temp_, test_node_temps_=None):
    lbound = -1
    ubound = 1
    # 10 learnable parameters, initialised uniformly in [lbound, ubound]
    params = pyro.param("params", lambda: (lbound - ubound) * torch.rand(10) + ubound)
    simulator = torch.matmul(params, X_) + init_temp_
    sigma = 5  # the +/-5 degC measurement precision
    with pyro.plate("data", len(test_node_temps_)):
        return pyro.sample("obs", dist.Normal(simulator, sigma), obs=test_node_temps_)
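And here is roughly how I imagined fitting it (purely a sketch: X_, init_temp_ and test_node_temps_ below are made-up placeholders, and I believe the empty guide only gives a point estimate of params rather than a posterior, which may be part of my problem):

import pyro.optim as optim
from pyro.infer import SVI, Trace_ELBO

X_ = torch.rand(10, 9)                      # placeholder design matrix
init_temp_ = torch.tensor(20.0)             # placeholder initial temperature
test_node_temps_ = 20 + 60 * torch.rand(9)  # placeholder: 3 locations x 3 points

def guide(X_, init_temp_, test_node_temps_=None):
    pass  # empty guide: no latent sample sites, so SVI just optimises params

svi = SVI(model, guide, optim.Adam({"lr": 0.01}), loss=Trace_ELBO())
for step in range(2000):
    svi.step(X_, init_temp_, test_node_temps_)

print(pyro.param("params"))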

How should the sigma of the likelihood be tuned? I don't actually know whether the posterior is Normal either.
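The only thing I could think of was making sigma itself learnable instead of hard-coding the 5, something like this inside model() (again, just a guess on my part):

from torch.distributions import constraints

# Replace the hard-coded sigma = 5 with a positive learnable parameter,
# initialised at the stated +/-5 degC measurement precision
sigma = pyro.param("sigma", torch.tensor(5.0), constraint=constraints.positive)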

Should I instead make the sample site a distribution fitted to the test data and treat the simulator output as the observations? Something like this:

return pyro.sample("obs", dist.Normal(mean_test_node_temps_, std_test_node_temps_), obs=simulator)

Any guidance is appreciated.
I did see some work on Approximate Bayesian Computation with Sequential Monte Carlo (ABC-SMC), but I can't find any documentation on how to actually implement it as a newbie.
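The closest I managed to piece together myself is a naive rejection loop like the one below. I know this is plain rejection ABC, not the SMC version, and simulate and observed are placeholders for my simulator wrapper and the measured temperatures:

import torch

def abc_rejection(simulate, observed, n_draws=10000, tol=5.0):
    # Keep parameter draws whose simulated temperatures all land
    # within tol of the observed temperatures.
    accepted = []
    for _ in range(n_draws):
        theta = 2 * torch.rand(10) - 1   # prior draw, each param in [-1, 1]
        if torch.max(torch.abs(simulate(theta) - observed)) < tol:
            accepted.append(theta)
    return torch.stack(accepted) if accepted else None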

Is this literally your simulator, or is it something else?