Deborah Mayo collected some reactions to my recent article, Induction and Deduction in Bayesian Data Analysis.

I’m pleased that everybody (philosopher Mayo, applied statistician Stephen Senn, and theoretical statistician Larry Wasserman) is so positive about my article and that nobody’s defending the sort of hard-core inductivism that’s featured on the Wikipedia page on Bayesian inference. Here’s the Wikipedia definition, which I *disagree* with:

Bayesian inference uses aspects of the scientific method, which involves collecting evidence that is meant to be consistent or inconsistent with a given hypothesis. As evidence accumulates, the degree of belief in a hypothesis ought to change. With enough evidence, it should become very high or very low. . . . Bayesian inference uses a numerical estimate of the degree of belief in a hypothesis before evidence has been observed and calculates a numerical estimate of the degree of belief in the hypothesis after evidence has been observed. . . . Bayesian inference usually relies on degrees of belief, or subjective probabilities, in the induction process and does not necessarily claim to provide an objective method of induction.

Again, I disagree with the above quote. As I write in my article, the above does not describe what I do in my applied work. I do go through models, sometimes starting with something simple and building up from there, other times starting with my first guess at a full model and then trimming it down until I can understand it in the context of data. And in any reasonably large problem I will at some point discard a model and replace it with something new.

I’m unhappy when people identify “Bayesian statistics” with a set of procedures that I don’t actually do.

Now here are my reactions to the reactions to my article:

**Mayo, the philosopher**, asks about the connections between Bayesian model checking and the severe testing described by her neo-Popperian philosophy. (Just to be clear, I mean “neo-Popperian” in a good way.) I don’t know the answer to this one, but on some conceptual level I think her approach and mine are similar to each other—and very different from what is described in that Wikipedia excerpt. The Bayesian data analysis that I practice is deductive within a model, with predictive falsification used to compare models.

**Senn, the applied statistician**, points out a hole in statistical practice in general (including my Bayesian methods and various non-Bayesian inferential approaches): we don’t fully adjust for the overfitting involved in model rejection and model choice. As he puts it: “Our statements of uncertainty are far from including all uncertainties. This is, perhaps realistic, but it is also not entirely satisfactory.” This echoes a line from page 77 of my article: “My point here is not to say that my preferred methods are better than others but rather to couple my admission of philosophical incoherence with a reminder that there is no available coherent alternative.”

**Wasserman, the theoretical statistician**, writes that “a pragmatic Bayesian will temporarily embrace the Bayesian viewpoint as a way to frame their analysis. But they are willing to step outside the framework and challenge their models and priors and use practical tools like goodness of fit tests.” I agree, with the slight modification that a goodness-of-fit test can itself be Bayesian (as I discuss briefly in sections 4 and 5 of the article under discussion, and in more detail in chapter 6 of Bayesian Data Analysis). Bayes means different things to different people, but to me it’s all about conditional probabilities of unknowns given model and data, and goodness-of-fit tests definitely fit into this framework.
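To make the idea of a Bayesian goodness-of-fit test concrete, here is a minimal sketch of a posterior predictive check in the style of chapter 6 of Bayesian Data Analysis. The model, data, and test statistic below are my own toy choices, not from the article: a normal model with known scale and a flat prior on the mean, so the posterior is available in closed form. The check simulates replicated datasets under the posterior and compares a test statistic on the replications to its observed value.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy data; assume (for this sketch only) the model y ~ N(mu, 1)
# with a flat prior on mu, so the posterior of mu is N(ybar, 1/n).
y = rng.normal(loc=0.0, scale=1.0, size=50)
n, ybar = len(y), y.mean()

# Draw mu from its posterior, then simulate replicated datasets y_rep.
n_sims = 2000
mu_draws = rng.normal(loc=ybar, scale=1.0 / np.sqrt(n), size=n_sims)
y_rep = rng.normal(loc=mu_draws[:, None], scale=1.0, size=(n_sims, n))

# Test statistic: the maximum absolute observation, a feature the
# normal model would fail to capture if the data were heavy-tailed.
T_obs = np.abs(y).max()
T_rep = np.abs(y_rep).max(axis=1)

# Posterior predictive p-value: the fraction of replications at
# least as extreme as the observed statistic. Values near 0 or 1
# flag a misfit between model and data in this direction.
p_value = (T_rep >= T_obs).mean()
```

The point of the example is that everything here is a conditional probability given model and data — the check never steps outside the Bayesian framework, yet it can still reject the model.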

Larry also writes that my “model-centered way of approaching statistics biases [me] against some great distribution-free methods.” I’m sure he’s right about that; see footnotes 1 and 4 of my article. We all tend to use methods with which we are comfortable, and I’m sure there are a lot of useful statistical approaches—Bayesian and non-Bayesian—that I would do well to use, if only I were more familiar with them.

As I wrote in my article, any given Bayesian method can be interpreted as a classical estimator or testing procedure and its frequency properties evaluated; conversely, non-Bayesian procedures can typically be reformulated as approximate Bayesian inferences under suitable choices of model. These processes of translation are valuable for their own sake and not just for communication purposes. Understanding the frequency properties of a Bayesian method can suggest guidelines for its effective application, and understanding the equivalent model corresponding to a classical procedure can motivate improvements or criticisms of the model which can be translated back into better understanding of the procedures. These acts of translation represent just one of the ways in which statistical theory can be relevant for applied statistics.
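One standard instance of this translation (my illustration, not an example from the article) is ridge regression: the classical penalized estimator is exactly the posterior mean under a normal prior on the coefficients, with the penalty parameter determined by the ratio of noise variance to prior variance. The data and variance settings below are arbitrary toy choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (hypothetical, just to illustrate the equivalence).
n, p = 40, 3
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + rng.normal(scale=1.0, size=n)

# Classical side: ridge estimate with penalty lam = sigma2 / tau2.
sigma2, tau2 = 1.0, 0.5
lam = sigma2 / tau2
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Bayesian side: posterior mean of beta under
#   y ~ N(X beta, sigma2 * I),  beta ~ N(0, tau2 * I).
precision = X.T @ X / sigma2 + np.eye(p) / tau2
beta_post_mean = np.linalg.solve(precision, X.T @ y / sigma2)

# The two computations give the same coefficient vector.
assert np.allclose(beta_ridge, beta_post_mean)
```

Reading the penalty as a prior is exactly the kind of translation described above: it suggests, for instance, when a shared penalty across coefficients is a poor modeling choice and a hierarchical prior would do better.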

Thanks, all, for your comments!

I am pretty sure that if Wasserman and his collaborators (Jing Lei & Jamie Robins) were to extend their distribution-free method for constructing prediction sets to three or more sets of data points, they would find it natural to introduce some form of Bayesian analysis.

Two Bayesian psychologists meet while walking down the street. The first says, “f(x,\beta),” and the second says, “g(y,\theta).” As they go on their way, the first thinks, “Hmm. I wonder what he meant by that?” The second thinks, “Hmm. I wonder what I meant by that?”

Can you give a pointer to the actual wiki page that purports to, but does not, describe the work you do? The paragraph you cite (and, for the most part, all pages that contain it that I find in a search) talks exclusively about Bayesian inference, not Bayesian statistics. It’s not obvious that these are close enough to being the same thing. (I am undecided; I feel I can make weak arguments either way.)

Bxg:

I’m talking about the Wikipedia article on Bayesian inference. It’s been rewritten since I took that excerpt, but the “General formulation” section in the current version captures the attitude I don’t like, in which Bayesian inference is all about computing the posterior probability of models.

