Hey,
I’m interested in Bayesian Hierarchical Logistic Regression. I have predictors X (dimension = D) and a binary response variable Y ∈ {0, 1}.
I want to learn the posterior distribution of my beta (some call it weights, coefficients, etc.), where 1/(1 + exp(-X @ beta)) gives me the probability of deciding 1 or 0.
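Concretely, this is what I mean (a toy sketch with made-up sizes, just to pin down the notation):

```python
import torch

N, D = 5, 3            # toy sizes: N observations, D predictors
X = torch.randn(N, D)
beta = torch.randn(D)
probs = torch.sigmoid(X @ beta)  # 1 / (1 + exp(-X @ beta)): P(Y = 1) for each row of X
```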
In terms of the hierarchical model, my posterior distribution is

log_p(beta, sigma | data) proportional to log_likelihood(data | beta) + log_Normal(beta | 0, sigma) + log_Normal(sigma | 0, 10.)
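Spelled out as a plain function (just a sketch of what I mean by the unnormalized target; sigma is assumed to be a positive scalar tensor and Y a float tensor of 0s and 1s):

```python
import torch
import torch.distributions as td

def log_posterior(beta, sigma, X, Y):
    # unnormalized log posterior: likelihood + hierarchical prior on beta + prior on sigma
    log_lik = td.Bernoulli(logits=X.matmul(beta)).log_prob(Y).sum()
    log_prior_beta = td.Normal(torch.zeros_like(beta), sigma).log_prob(beta).sum()
    log_prior_sigma = td.Normal(0., 10.).log_prob(sigma)
    return log_lik + log_prior_beta + log_prior_sigma
```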
My implementation with NUTS is:

```python
import torch
import pyro
import pyro.distributions as dist
from pyro.infer import MCMC, NUTS

def model(X, Y):
    # a scale must be positive, so a half-normal stands in for the Normal(0, 10) prior on sigma
    sigma = pyro.sample('sigma', dist.HalfNormal(10.))
    # hierarchical prior on the coefficients (Normal takes a scale, so sigma rather than sigma ** 2)
    beta = pyro.sample('beta', dist.Normal(torch.zeros(X.shape[1]), sigma).to_event(1))
    # Bernoulli likelihood with logits X @ beta
    with pyro.plate('data', X.shape[0]):
        pyro.sample('y', dist.Bernoulli(logits=X.matmul(beta)), obs=Y)

nuts_kernel = NUTS(model)
mcmc = MCMC(nuts_kernel, num_samples=100, warmup_steps=100, num_chains=1)
mcmc.run(X, Y)  # I assume the arguments to run() are passed into model()
samples = mcmc.get_samples()  # posterior draws for 'sigma' and 'beta'
```
Is this a correct high-level implementation?
Also, I’m interested in many other posterior distributions where I only have an unnormalized log-probability function and I don’t know the “model”. How do I specify that in Pyro? Just like one would do in TensorFlow Probability, such as:
```python
mcmc.kernel(
    target_log_prob_fn=target_log_prob_fn,  # just a callable function
    current_state=[0.],
    step_size=[0.3],
    seed=3)
```
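From the docs it looks like the HMC/NUTS kernels can also take a potential_fn (the negative of the unnormalized log density, as a function of a dict of parameters) instead of a model, with initial_params passed to MCMC. Would something like this sketch be the intended way? (The 1-D standard-normal target here is just a made-up stand-in for my real target_log_prob_fn.)

```python
import torch
from pyro.infer import MCMC, NUTS

# toy stand-in for an arbitrary unnormalized log density: a 1-D standard normal
def target_log_prob_fn(z):
    return -0.5 * z ** 2

# potential_fn takes a dict of parameter tensors and returns the NEGATIVE log density
def potential_fn(params):
    return -target_log_prob_fn(params['z'])

nuts_kernel = NUTS(potential_fn=potential_fn)
mcmc = MCMC(nuts_kernel,
            num_samples=100,
            warmup_steps=100,
            initial_params={'z': torch.tensor(0.)})
mcmc.run()
samples = mcmc.get_samples()['z']
```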