Clarification: Somebody pointed out that, when people come here from a web search, they won’t realize that it’s an April Fool’s joke. See here for my article in Bayesian analysis that expands on the blog entry below, along with discussion by four statisticians and a rejoinder by myself that responds to the criticisms that I raised.
Below is the original blog entry.
Bayesian inference is a coherent mathematical theory but I wouldn’t trust it in scientific applications. Subjective prior distributions don’t inspire confidence, and there’s no good objective principle for choosing a noninformative prior (even if that concept were mathematically defined, which it’s not). Where do prior distributions come from, anyway? I don’t trust them and I see no reason to recommend that other people do, just so that I can have the warm feeling of philosophical coherence.
Bayesian theory requires a great deal of thought about the given situation to apply sensibly, and recommending that scientists use Bayes’ theorem is like giving the neighborhood kids the key to your F-16. I’d rather start with tried-and-true methods and then generalize using something I can trust, like statistical theory and minimax principles, which don’t depend on subjective beliefs. Especially when the priors I see in practice are typically just convenient conjugate forms. What a coincidence that, of all the infinite variety of priors that could be chosen, it always seems to be the normal, gamma, beta, etc., that turn out to be the right choices!
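To see why those conjugate forms are so “convenient”: with a conjugate prior, the posterior falls out in closed form with no integration at all. A minimal sketch of the textbook beta-binomial case (the numbers here are made up for illustration):

```python
# Beta-binomial conjugacy: with a Beta(a, b) prior on a coin's
# success probability and k successes observed in n trials, the
# posterior is Beta(a + k, b + n - k) -- no integration required.
def beta_binomial_update(a, b, k, n):
    """Return the posterior (a, b) after k successes in n trials."""
    return a + k, b + n - k

# Uniform Beta(1, 1) prior, then observe 7 heads in 10 flips:
a_post, b_post = beta_binomial_update(1, 1, 7, 10)
print(a_post, b_post)                  # 8 4
print(a_post / (a_post + b_post))      # posterior mean = 2/3
```

The update is pure bookkeeping, which is exactly the convenience being complained about: the prior family is chosen so that the algebra works out, not because anyone believes it.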
To restate these concerns mathematically: I like unbiased estimates and I like confidence intervals that really have their advertised confidence coverage. I know that these aren’t always going to be possible, but I think the right way forward is to get as close to these goals as possible and to develop robust methods that work with minimal assumptions. The Bayesian approach–giving up even trying to approximate unbiasedness and instead relying on stronger and stronger assumptions–seems like the wrong way to go.
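“Advertised confidence coverage” is a checkable property: simulate many datasets from a known truth and count how often the interval contains it. A small sketch for the standard normal-theory interval (simulation sizes are arbitrary):

```python
import random
import statistics

def coverage(n_sims=2000, n=30, mu=0.0, sigma=1.0, z=1.96):
    """Fraction of nominal 95% normal-theory intervals that cover mu."""
    rng = random.Random(1)  # fixed seed for reproducibility
    hits = 0
    for _ in range(n_sims):
        xs = [rng.gauss(mu, sigma) for _ in range(n)]
        m = statistics.mean(xs)
        se = statistics.stdev(xs) / n ** 0.5
        if m - z * se <= mu <= m + z * se:
            hits += 1
    return hits / n_sims

print(coverage())  # close to the advertised 0.95
```

(Using z = 1.96 rather than the t critical value makes the coverage very slightly below 95% at n = 30; the point is only that the claim is directly verifiable by simulation.)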
In the old days, Bayesian methods at least had the virtue of being mathematically clean. Nowadays they all seem to be computed using Markov chain Monte Carlo, which means that, not only can you not realistically evaluate the statistical properties of the method, but you can’t even be sure it has converged, which just adds one more item to the list of unverifiable assumptions.
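For concreteness, here is the kind of thing being objected to: a toy random-walk Metropolis sampler plus the usual convergence heuristic of running chains from dispersed starting points and checking that they agree. The target and settings are made up for illustration:

```python
import math
import random

def metropolis(n_iter, start, step=1.0, seed=0):
    """Random-walk Metropolis sampler targeting a standard normal."""
    rng = random.Random(seed)
    x, chain = start, []
    for _ in range(n_iter):
        prop = x + rng.gauss(0.0, step)
        # Accept with probability min(1, p(prop)/p(x)) for p = N(0, 1)
        if rng.random() < math.exp(min(0.0, 0.5 * (x * x - prop * prop))):
            x = prop
        chain.append(x)
    return chain

# Crude convergence check: chains started far apart should agree.
chains = [metropolis(5000, s, seed=i) for i, s in enumerate((-10.0, 0.0, 10.0))]
means = [sum(c[2500:]) / 2500 for c in chains]  # drop first half as burn-in
print(means)  # all near 0 if the chains have mixed
```

Note that agreement between chains is a necessary condition for convergence, not a sufficient one–which is precisely the complaint: the diagnostic can fail to detect a chain stuck in one mode.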
People tend to believe results that support their preconceptions and disbelieve results that surprise them. Bayesian methods encourage this undisciplined mode of thinking. I’m sure that many individual Bayesian statisticians are acting in good faith, but they’re providing encouragement to sloppy and unethical scientists everywhere. And, probably worse, Bayesian techniques motivate even the best-intentioned researchers to get stuck in the rut of prior beliefs.
Bayesianism assumes:
(a) Either a weak or uniform prior, in which case why bother?
(b) Or a strong prior, in which case why collect new data?
(c) Or, more realistically, something in between, in which case Bayesianism always seems to duck the issue.
Nowadays people use a lot of empirical Bayes methods. I applaud the Bayesians’ newfound commitment to empiricism but am skeptical of this particular approach, which always seems to rely on an assumption of “exchangeability.” I do a lot of work in political science, where people are embracing Bayesian statistics as the latest methodological fad. Well, let me tell you something. The 50 states aren’t exchangeable. I’ve lived in a few of them and visited nearly all the others, and calling them exchangeable is just silly. Calling it a hierarchical or a multilevel model doesn’t change things–it’s an additional level of modeling that I’d rather not do. Call me old-fashioned, but I’d rather let the data speak without applying a probability distribution to something like the 50 states, which are neither random nor a sample.
Also, don’t these empirical Bayes methods use the data twice? If you’re going to be Bayesian, then be Bayesian: it seems like a cop-out, and contradictory to the Bayesian philosophy, to estimate the prior from the data. If you want to do hierarchical modeling, I prefer a method such as generalized estimating equations that makes minimal assumptions.
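To pin down the move being objected to: in the simplest normal-means version of empirical Bayes, the prior’s spread is estimated by moments from the very data it will then shrink. A minimal sketch, with made-up group estimates:

```python
import statistics

def eb_shrink(estimates, sampling_var):
    """Empirical-Bayes shrinkage for a normal-normal model: the prior
    (between-group) variance is estimated from the data by moments,
    then each estimate is pulled toward the grand mean."""
    grand_mean = statistics.mean(estimates)
    # Method of moments: Var(observed) = prior variance + sampling variance
    prior_var = max(statistics.pvariance(estimates) - sampling_var, 0.0)
    w = prior_var / (prior_var + sampling_var)  # weight on the raw estimate
    return [w * y + (1.0 - w) * grand_mean for y in estimates]

raw = [2.0, 0.5, -1.0, 3.5, -2.0]        # hypothetical group estimates
print(eb_shrink(raw, sampling_var=1.0))  # each value pulled toward 0.6
```

The data enter twice: once to set the prior variance (and hence the shrinkage weight), and again in the estimates being shrunk–which is exactly the “cop-out” in question.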
And don’t even get me started on what Bayesians say about data collection. The mathematics of Bayesian decision theory leads inexorably to the idea that random sampling and random treatment allocation are inefficient, that the best designs are deterministic. I have no quarrel with the mathematics here–the mistake lies deeper, in the philosophical foundations: the idea that the goal of statistics is to make an optimal decision. A Bayes estimator is a statistical estimator that minimizes the average risk, but when we do statistics, we’re not trying to “minimize the average risk”; we’re trying to do estimation and hypothesis testing. If the Bayesian philosophy of axiomatic reasoning implies that we shouldn’t be doing random sampling, then that’s a strike against the theory right there. Bayesians also believe in the irrelevance of stopping times–that, if you stop an experiment based on the data, it doesn’t change your inference. Unfortunately for the Bayesian theory, the p-value _does_ change when you alter the stopping rule, and no amount of philosophical reasoning will get you around that point.
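The p-value side of the stopping-rule point is easy to check by simulation: generate null data, test after every batch, and stop at the first nominal 5%-level rejection, and the overall type I error rate climbs well above 5%. A rough sketch (z-test with known variance; the batch sizes are made up):

```python
import math
import random

def rejects_with_peeking(rng, n_batches=10, batch=10):
    """Generate null N(0, 1) data in batches; after each batch run a
    z-test (known sigma = 1) and stop at the first |z| > 1.96."""
    xs = []
    for _ in range(n_batches):
        xs.extend(rng.gauss(0.0, 1.0) for _ in range(batch))
        z = sum(xs) / math.sqrt(len(xs))  # z = mean / (sigma / sqrt(n))
        if abs(z) > 1.96:
            return True
    return False

rng = random.Random(0)
rate = sum(rejects_with_peeking(rng) for _ in range(2000)) / 2000
print(rate)  # well above the nominal 0.05
```

With ten looks at the data, the realized rejection rate under the null is far above the nominal 5%, which is why the frequentist properties of a procedure depend on the stopping rule even though the likelihood does not.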
I can’t keep track of what all those Bayesians are doing nowadays–unfortunately, all sorts of people are being seduced by the promises of automatic inference through the “magic of MCMC”–but I wish they would all just stop already and get back to doing statistics the way it should be done, back in the old days when a p-value stood for something, when a confidence interval meant what it said, and statistical bias was something to eliminate, not something to embrace.