I came across this article by the late Leo Breiman from 1997, “No Bayesians in foxholes.” It’s fun sometimes to go back and see what people were saying nearly a decade ago. This one is particularly interesting because it presents a strongly anti-Bayesian position which used to be common in statistics (see, for example, various JRSS-B discussions during the 70s and 80s) but you don’t really hear about anymore. Breiman wrote:
The Current Index of Statistics lists all statistics articles published since 1960 by author, title, and key words. The CIS includes articles from a multitude of journals in various fields—medical statistics, reliability, environmental, econometrics, and business management, as well as all of the statistics journals. Searching under anything that contained the word “data” in 1995–1996 produced almost 700 listings. Only eight of these mentioned Bayes or Bayesian, either in the title or key words. Of these eight, only three appeared to apply a Bayesian analysis to data sets, and in these, there were only two or three parameters to be estimated.
Actually, our toxicology paper appeared in the Journal of the American Statistical Association in 1996—how could Breiman have missed that one (our model had 90 parameters, and the paper had a detailed discussion of why the prior distribution was needed in order to get reasonable results)? Was he restricting himself to papers with “data” in their keywords? Putting “data” as a keyword in an applied statistics paper is something like putting “physics” as a keyword in a physics paper!
OK, OK . . .
My point here isn’t to pick on Breiman, who isn’t around to defend himself (when we were both at Berkeley, I tried to talk with him about Bayesian methods, but we never found the time for the conversation, something I strongly regret in retrospect), but rather to reiterate a point I’ve made elsewhere, which is how our attitudes toward methods are so strongly shaped by our direct experiences. Continuing my quoting from the Breiman article:
I [Breiman] spent 13 years as a full-time consultant and continue to consult in many fields today—air-pollution prediction, analysis of highway traffic, the classification of radar returns, speech recognition, and stock-market prediction, among others. Never once, either in my work with others or in anyone else’s published work in the fields in which I consulted, did I encounter the application of Bayesian methodology to real data.
. . .
All it would take to convince me [about Bayesian methods] are some major success stories in complex, high-dimensional problems where the Bayesian approach wins big compared to any frequentist approach. . . . A success story is a tough problem on which numbers of people have worked where a Bayesian approach has done demonstrably better than any other approach.
Now that these success stories are out there (and are reachable with almighty Google—which puts the Current Index of Statistics to shame—or by flipping through various textbooks), I suppose Breiman would have been convinced. What’s funny is that he couldn’t just say that he had made great contributions to statistics, and others had made important contributions to applied problems using Bayesian methods. He had to say that “when big, real, tough problems need to be solved, there are no Bayesians.”
I think that a more pluralistic attitude is more common in statistics today, partly through the example of people like Brad Efron who’ve had success with both Bayesian and non-Bayesian methods, and partly through the pragmatic attitudes of computer scientists, who believe neither the extreme Bayesians who told them that they must use subjective Bayesian probability (or else—gasp—have incoherent inferences) nor the anti-Bayesians who talked about “tough problems” without engaging with research outside their subfields.
My impression is that there’s a lot more openness now, and a willingness in evaluating methods to go beyond the two poles of pure subjectivism (like those Bayesians at the 1991 Valencia meeting who were opposed in principle to checking model fit) and barren significance testing (like those papers that used to appear in the statistical journals with tables and tables of simulations of coverage probabilities). It’s refreshing to see the errors of even the experts of a decade ago—perhaps this will give us the courage to make our own rash statements, which can in their turn be overtaken by reality.