Choice of the likelihood density in Bayesian Hierarchical Linear Regression by Carlos Souza

The tutorial is very helpful and interesting. It tackles a competition problem from kaggle.com, and I was fascinated to see a real use of full Bayesian inference in the field.

In the model, the likelihood is chosen to be Gaussian, which seems natural. Evaluation is then done with a modified Laplace log-likelihood (based on the double-exponential distribution), as required by the competition rules.
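For reference, as I understand the competition rules, the modified Laplace log-likelihood clips the predicted uncertainty below and the absolute error above before scoring. A minimal NumPy sketch (the function name is mine, not from the tutorial):

```python
import numpy as np

def laplace_log_likelihood(fvc_true, fvc_pred, sigma):
    """Per-prediction modified Laplace log-likelihood (competition metric).

    sigma is clipped below at 70 and the absolute error is clipped above
    at 1000, so a single wild prediction cannot dominate the score.
    """
    sigma_clipped = np.maximum(sigma, 70.0)
    delta = np.minimum(np.abs(fvc_true - fvc_pred), 1000.0)
    return -np.sqrt(2.0) * delta / sigma_clipped - np.log(np.sqrt(2.0) * sigma_clipped)
```

The final score is the mean of this quantity over all predictions; larger (less negative) is better, which is why -6.05 counts as an improvement over -6.1375.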

After reading the tutorial a few times and running several experiments, an idea came to me: use the Laplace distribution as the observation density instead of the Gaussian.

I ran the tutorial code after changing the observation distribution from Normal to Laplace, and ended up with a slightly higher (improved) Laplace log-likelihood: -6.05 instead of the original -6.1375.
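In NumPyro the swap itself is a one-line change in the model (sampling the observations from `dist.Laplace(...)` instead of `dist.Normal(...)`). One intuition for why it can help: the Laplace log-density is linear in the absolute residual, the same shape as the evaluation metric, while the Gaussian penalizes large residuals quadratically. A small self-contained sketch of that difference (function names are mine):

```python
import numpy as np

def normal_logpdf(x, mu, sigma):
    # Gaussian log-density: quadratic penalty in the residual
    return -0.5 * np.log(2.0 * np.pi * sigma**2) - (x - mu) ** 2 / (2.0 * sigma**2)

def laplace_logpdf(x, mu, b):
    # Laplace log-density: linear penalty in |residual|, like the metric
    return -np.log(2.0 * b) - np.abs(x - mu) / b

# A residual 10 scale-units out is far less improbable under the Laplace:
print(normal_logpdf(10.0, 0.0, 1.0))   # about -50.9
print(laplace_logpdf(10.0, 0.0, 1.0))  # about -10.7
```

Because the training objective then has the same shape as the scoring rule, the posterior is pulled toward predictions that the metric rewards.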

It is only a small change, and the difference may well be within the range of natural MCMC sampling variation, but I found the result interesting and wanted to share it with the group.

Many thanks for the tutorial.


Thanks for the tips, @yongduek!