Today I received my copy of “Handbook of Markov Chain Monte Carlo”. Up to now I have consulted “Monte Carlo Strategies in Scientific Computing” by Jun S. Liu. In this post I will give my first thoughts on the new book.

Liu’s book is indeed awesome. The author himself has made many original contributions to sampling algorithms, and many of his ideas are explained in the book. The chapters on the Gibbs Sampler (chapter 6), General Conditional Sampling (chapter 7) and multi-chain MCMC (chapters 10 and 11) are excellent. But given that the book was written 10 years ago, many recent developments are missing, and some algorithms were not given enough space (in particular, Reversible Jump MCMC was given only 2 pages!).

The handbook covers many recent developments such as likelihood-free MCMC, Adaptive MCMC, etc., and treats Reversible Jump MCMC with the detail it deserves (20 pages). Likelihood-free MCMC is a computational method for Approximate Bayesian Computation (ABC), which has recently gained a lot of attention. Instead of computing the likelihood ratio in the acceptance probability of the Metropolis-Hastings (MH) algorithm, one approximates it by generating simulated data from the model and comparing it with the observed data. Adaptive MCMC is a general name for a class of MCMC algorithms whose parameters can be tuned automatically during the search. For example, consider an MCMC kernel that consists of a Gibbs sampler and an MH sampler: with probability α one performs the Gibbs move, and with probability 1 − α the MH move. The problem is that α has to be fixed in advance, while we often want α to change along the course of the search — larger at the beginning in order to make large transitions around the parameter space and quickly identify promising areas, and smaller later to better explore the local landscape around good candidates. One cannot change α carelessly, since there is no guarantee that the resulting MCMC kernel would still converge to the desired distribution.
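To make both ideas concrete, here is a minimal Python sketch. It is not taken from either book: the toy Gaussian model, the tolerance-based acceptance rule, and the decaying schedule for α are my own illustrative choices. The first function replaces the likelihood ratio in the MH acceptance step with a comparison between simulated pseudo-data and the observed data (the basic likelihood-free idea); the second mixes a Gibbs move and an MH move with a weight α_t that changes slowly over time (genuine adaptive MCMC would tune α from the chain’s own history, under conditions such as diminishing adaptation).

```python
import numpy as np

def abc_mh_step(theta, y_obs, log_prior, simulate, distance, eps, prop_sd, rng):
    """One likelihood-free Metropolis-Hastings step (toy sketch).

    The proposal is a symmetric random walk, so the usual likelihood ratio
    is replaced by an indicator: the proposal can only be accepted if the
    pseudo-data it generates lands within tolerance eps of the observed data.
    """
    theta_prop = theta + rng.normal(0.0, prop_sd)
    y_sim = simulate(theta_prop, rng)   # simulate data instead of evaluating the likelihood
    if distance(y_sim, y_obs) <= eps:
        # with a symmetric proposal, only the prior ratio remains
        if np.log(rng.uniform()) < log_prior(theta_prop) - log_prior(theta):
            return theta_prop
    return theta

def adaptive_mixture_step(x, t, gibbs_move, mh_move, rng,
                          alpha_start=0.9, alpha_end=0.1, tau=1000.0):
    """One step of a two-component kernel: Gibbs move with probability
    alpha_t, MH move otherwise.  Here alpha_t merely follows a fixed,
    slowly decaying schedule whose changes vanish as t grows — the flavour
    of condition (diminishing adaptation) that keeps adapted kernels valid."""
    alpha_t = alpha_end + (alpha_start - alpha_end) * np.exp(-t / tau)
    if rng.uniform() < alpha_t:
        return gibbs_move(x, rng)
    return mh_move(x, rng)

# Toy usage of the likelihood-free step: infer the mean of a Normal(theta, 1)
# model when only the sample mean of 100 observations is recorded.
rng = np.random.default_rng(0)
y_obs = rng.normal(2.0, 1.0, size=100).mean()
simulate = lambda th, r: r.normal(th, 1.0, size=100).mean()
distance = lambda a, b: abs(a - b)
log_prior = lambda th: -0.5 * th ** 2 / 100.0   # broad Normal(0, 10^2) prior

theta, draws = 0.0, []
for t in range(5000):
    theta = abc_mh_step(theta, y_obs, log_prior, simulate, distance,
                        eps=0.1, prop_sd=0.5, rng=rng)
    draws.append(theta)
print(np.mean(draws[1000:]))   # should land near the true mean of 2
```

The point of the sketch is only to show where the likelihood ratio disappears (it becomes a distance check on simulated data) and where the tuning parameter α enters; anything more realistic is exactly what the corresponding handbook chapters are for.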

The book starts with theoretical chapters discussing many aspects of MCMC. Each chapter is written by distinguished researchers in the corresponding field, so one can expect to learn from the experience and perspectives of many experts in just one book. The chapters that pique my interest right now are Reversible Jump MCMC by Fan and Sisson, Adaptive MCMC by Jeffrey Rosenthal, Hamiltonian MCMC by Radford Neal, and Likelihood-free MCMC by Sisson and Fan.

The second part of the book consists of applications and case studies, with examples ranging from educational research to high-energy astrophysics. This is certainly richer than the examples in Liu’s book, which were biased toward his own research fields. But to tell the truth, I am not that interested in reading about MCMC in educational research or astrophysics.

On a final note, the book does *not* cover all the new developments in MCMC up to 2011. The biggest missing part is perhaps Particle MCMC, which is gaining more and more attention. Some other notable algorithms that I really wish had been mentioned are the equi-energy sampler and Riemann Manifold MCMC. The equi-energy sampler is a simple yet powerful algorithm introduced in 2004: the sampler keeps a large memory of where it has been, and uses this information to speed up convergence. I think a detailed discussion would help beginners like me better understand its strong and weak points compared to Population MCMC. As for Particle MCMC and Riemann Manifold MCMC, I think intuitive introductory discussions would help non-experts like me gather enough courage to delve into those equation-packed papers.