“The well-known 0.234”

I came across this abstract from Mylène Bédard:

Optimal acceptance rates for Metropolis algorithms: moving beyond 0.234

In this talk, we shall optimize the efficiency of random walk Metropolis algorithms for multidimensional target distributions with scaling terms possibly depending on the dimension. We show that when there does not exist any component having a scaling term significantly smaller than the others, the asymptotically optimal acceptance rate is the well-known 0.234. We also show that when this condition is not met the limiting process of the algorithm is altered, yielding an asymptotically optimal acceptance rate which might drastically differ from the usual 0.234. In particular, we prove that as the dimension d increases, the sequence of stochastic processes formed by, say, the ith component of each Markov chain usually converges to a Langevin diffusion process with a distinct speed measure, except in particular cases where it converges to a one-dimensional Metropolis-Hastings algorithm with a singular acceptance rule. We also discuss the use of inhomogeneous proposals, which might prove essential in specific cases.
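For readers who haven't played with this: here's a minimal sketch (my own illustration, not from either paper) of a random walk Metropolis sampler on an i.i.d. standard normal target, using the 2.38/sqrt(d) proposal scaling associated with the 0.234 result. The dimension and step count are arbitrary choices for the demo.

```python
import math
import random

random.seed(0)

d = 20                       # dimension of the target (arbitrary for the demo)
sigma = 2.38 / math.sqrt(d)  # the classic proposal scaling for i.i.d. targets

def log_target(x):
    # i.i.d. standard normal components
    return -0.5 * sum(xi * xi for xi in x)

x = [0.0] * d
lp = log_target(x)
accepts = 0
n_steps = 50_000
for _ in range(n_steps):
    # symmetric Gaussian random walk proposal
    prop = [xi + sigma * random.gauss(0.0, 1.0) for xi in x]
    lp_prop = log_target(prop)
    # Metropolis accept/reject step
    if math.log(random.random()) < lp_prop - lp:
        x, lp = prop, lp_prop
        accepts += 1

rate = accepts / n_steps
print(f"acceptance rate ~ {rate:.3f}")
```

With this scaling the empirical acceptance rate should come out in the neighborhood of 0.234 (a bit higher at moderate d, since 0.234 is the asymptotic value); Bédard's point is that this breaks down when one component's scaling term is much smaller than the others.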

This makes me so happy: the well-known 0.234. It's so cool to have a mathematical constant all my own (ok, I guess it's shared with Gareth, Wally, and Yuval, but still...). This kind of thing makes it all worth it.

Just for laffs, I googled “0.234”, and our result was the 6th link! Not quite the fame of “137” but still something.

P.S. Here's Bédard's paper and here's the original 0.234 paper where it all started. It was lots of fun figuring that out. (And I'm sure Gareth and Jeff have had lots of fun extending these results far beyond anything I could've done.)