Questions about Bayesian optimization tutorial

Hello, all.
I’m working through the Pyro Bayesian optimization tutorial:
https://github.com/pyro-ppl/pyro/blob/dev/tutorial/source/bo.ipynb

Among the steps, I’m finding the code below really hard to understand.

import torch
from torch import autograd, optim
from torch.distributions import constraints, transform_to

def find_a_candidate(x_init, lower_bound=0, upper_bound=1):
    # transform x_init to an unconstrained domain
    constraint = constraints.interval(lower_bound, upper_bound)
    unconstrained_x_init = transform_to(constraint).inv(x_init)
    unconstrained_x = unconstrained_x_init.clone().detach().requires_grad_(True)
    minimizer = optim.LBFGS([unconstrained_x], line_search_fn='strong_wolfe')

    def closure():
        minimizer.zero_grad()
        # map back into the constrained interval before evaluating
        x = transform_to(constraint)(unconstrained_x)
        y = lower_confidence_bound(x)  # acquisition function defined earlier in the tutorial
        # compute dy/d(unconstrained_x) and accumulate it into unconstrained_x.grad
        autograd.backward(unconstrained_x, autograd.grad(y, unconstrained_x))
        return y

    minimizer.step(closure)
    # after finding a candidate in the unconstrained domain,
    # convert it back to the original domain
    x = transform_to(constraint)(unconstrained_x)
    return x.detach()

In particular, this part:

constraint = constraints.interval(lower_bound, upper_bound)
unconstrained_x_init = transform_to(constraint).inv(x_init)

“transform_to(constraint)” looks to me like a map [lower_bound, upper_bound) → (-inf, inf).
But if that’s the case, why do we need “.inv(x_init)” right after constructing the transform?

Thanks for reading.

Hi @Minsoo, you have the direction backwards: transform_to(constraint) maps the unconstrained domain (-inf, inf) into the interval (lower_bound, upper_bound). Its inverse, .inv(x_init), therefore goes the other way: it sends x_init, which lives in the constrained interval, to the unconstrained space, where LBFGS can minimize freely without worrying about the bounds. At the end, transform_to(constraint)(unconstrained_x) maps the candidate back into the interval. You can find more details in the PyTorch docs for torch.distributions.transform_to, or play with the actual code. :slight_smile:
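
If it helps, here is a minimal sketch of both directions, assuming the unit interval (only torch is needed; the printed values in the comments are approximate):

import torch
from torch.distributions import constraints, transform_to

constraint = constraints.interval(0., 1.)
t = transform_to(constraint)

# forward: unconstrained -> constrained
u = torch.tensor([-3., 0., 3.])
print(t(u))         # tensor([0.0474, 0.5000, 0.9526]), squashed into the interval

# inverse: constrained -> unconstrained, which is what .inv(x_init) does
x = torch.tensor([0.25, 0.50, 0.75])
print(t.inv(x))     # tensor([-1.0986, 0.0000, 1.0986]), free to optimize over

# the round trip recovers the original point (up to floating-point error)
print(t.inv(t(u)))  # approximately tensor([-3., 0., 3.])

So .inv(x_init) is just moving the starting point into the space where the unconstrained optimizer actually runs.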