A differential evolution MCMC algorithm

Cajo Ter Braak just published this paper in Statistics and Computing. It's a Metropolis-like algorithm that automatically adapts the scale and orientation of its jumps. Perhaps it could be useful in a Umacs or Bugs-like setting? Here's the abstract:

Differential Evolution (DE) is a simple genetic algorithm for numerical optimization in real parameter spaces. In a statistical context one would not just want the optimum but also its uncertainty. The uncertainty distribution can be obtained by a Bayesian analysis (after specifying prior and likelihood) using Markov Chain Monte Carlo (MCMC) simulation. This paper integrates the essential ideas of DE and MCMC, resulting in Differential Evolution Markov Chain (DE-MC). DE-MC is a population MCMC algorithm, in which multiple chains are run in parallel. DE-MC solves an important problem in MCMC, namely that of choosing an appropriate scale and orientation for the jumping distribution. In DE-MC the jumps are simply a fixed multiple of the differences of two random parameter vectors that are currently in the population. The selection process of DE-MC works via the usual Metropolis ratio which defines the probability with which a proposal is accepted. In tests with known uncertainty distributions, the efficiency of DE-MC with respect to random walk Metropolis with optimal multivariate Normal jumps ranged from 68% for small population sizes to 100% for large population sizes and even to 500% for the 97.5% point of a variable from a 50-dimensional Student distribution. Two Bayesian examples illustrate the potential of DE-MC in practice. DE-MC is shown to facilitate multidimensional updates in a multi-chain "Metropolis-within-Gibbs" sampling approach. The advantages of DE-MC over conventional MCMC are simplicity, speed of calculation and convergence, even for nearly collinear parameters and multimodal densities.
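For intuition, here is a minimal sketch of the update the abstract describes: each chain's proposal is a fixed multiple of the difference between two other randomly chosen chains, accepted with the usual Metropolis ratio. The function name, arguments, and defaults below are my own illustration, not code from the paper; the scale gamma = 2.38/sqrt(2d) is the paper's suggested default.

```python
import numpy as np

def de_mc(log_post, x0, n_iter, gamma=None, eps=1e-4, rng=None):
    """Sketch of Differential Evolution Markov Chain (DE-MC).

    log_post: function returning the log posterior density at a point.
    x0: (n_chains, d) array giving the initial population of chains.
    """
    rng = np.random.default_rng() if rng is None else rng
    X = np.array(x0, dtype=float)          # current population
    n, d = X.shape
    if gamma is None:
        gamma = 2.38 / np.sqrt(2 * d)      # scale suggested in the paper
    logp = np.array([log_post(x) for x in X])
    samples = np.empty((n_iter, n, d))
    for t in range(n_iter):
        for i in range(n):
            # two distinct chains other than i supply the difference vector
            r1, r2 = rng.choice([j for j in range(n) if j != i], 2, replace=False)
            prop = X[i] + gamma * (X[r1] - X[r2]) + eps * rng.standard_normal(d)
            lp = log_post(prop)
            # usual Metropolis acceptance ratio, on the log scale
            if np.log(rng.random()) < lp - logp[i]:
                X[i], logp[i] = prop, lp
        samples[t] = X
    return samples
```

Because the proposal is built from differences within the population, its scale and orientation track the current spread of the chains, which is what removes the usual tuning of the jumping distribution.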

I’d appreciate anyone’s thoughts on this.

2 thoughts on "A differential evolution MCMC algorithm"

  1. Differential evolution has been a very popular method in evolutionary computing. When we observed its behavior, it seemed able to focus the "jumping" within only those dimensions that were relevant for sampling. It does this in a systematic way, essentially maintaining an "interesting" part of the parameter space using something like a convex hull. The trouble is that if it is not initialized correctly (it has a number of parameters), it rapidly falls into a local minimum and finds it hard to escape (because the hull becomes very small). Maybe DE-MC solves this problem.

  2. I have put a couple of DE MCMC algorithms into the OpenBUGS developer version. One does single-site updating, the other block updating. Blocks are chosen using the usual BUGS practice. One slight problem with DE-MCMC is that sometimes it does not move. I have therefore modified the jump distance every 10th iteration, dividing it by 10. Is there a better approach? Some sort of adaptation? So far the block sizes are fairly small (say 25 nodes).

    Moi
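The periodic jump-scale shrink described in the second comment could be sketched as a one-sweep update like the following. Everything here (names, the base scale, the factor of 10) is my illustration of the comment's idea, not OpenBUGS code:

```python
import numpy as np

def de_mc_sweep(X, logp, log_post, t, gamma0, eps=1e-4, rng=None):
    """One DE-MC sweep over population X, shrinking the jump scale by a
    factor of 10 on every 10th iteration so a stuck sampler can still move.
    X: (n_chains, d) population; logp: log posterior at each chain."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    gamma = gamma0 / 10.0 if t % 10 == 0 else gamma0   # occasional small jumps
    for i in range(n):
        r1, r2 = rng.choice([j for j in range(n) if j != i], 2, replace=False)
        prop = X[i] + gamma * (X[r1] - X[r2]) + eps * rng.standard_normal(d)
        lp = log_post(prop)
        if np.log(rng.random()) < lp - logp[i]:        # Metropolis accept
            X[i], logp[i] = prop, lp
    return X, logp
```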
