Learning from and responding to statistical criticism

In 1960, Irwin Bross, “a public-health advocate and biostatistician . . . known for challenging scientific dogmas,” published an article called “Statistical Criticism.” Here it is.

A few months ago, Dylan Small, editor of the journal Observational Studies, invited various people including me to write a comment on Bross’s article.

Here’s what I wrote:

Irwin Bross’s article, “Statistical Criticism,” gives advice that is surprisingly current, given that it appeared in the journal Cancer nearly sixty years ago. Indeed, the only obviously dated aspects of this paper are the use of the generic male pronoun and the sense that it was still an open question whether cigarette smoking caused lung cancer.

In his article, Bross acts as a critic of criticism, expressing support for the general form but recommending that critics go beyond hit-and-run criticism, dogmatism, speculation, and tunnel vision. This all seems reasonable to me, but I think criticisms can also be taken at face value. If I publish a paper and someone replies with a flawed criticism, I should still be able to respond to its specifics. Indeed, there have been times when my own work has been much improved by criticism that was itself blinkered but which still revealed important and fixable flaws in my published work.

I would go further and argue that nearly all criticism has value. Again, I’ll place myself in the position of the researcher whose work is being slammed. Consider the following sorts of statistical criticism, listed in roughly decreasing order of quality:

A thorough, comprehensive reassessment. . . .

A narrow but precise correction. . . .

Identification of a potential problem. . . .

Confusion. . . .

Hack jobs. . . .

Another way to see the value of post-publication criticism, even when it is imperfect, is to consider the role of pre-publication review. It is perfectly acceptable for a peer reviewer to raise a narrow point, to speculate, or to point out a potential data flaw without demonstrating that the problem in question is consequential. Referees are encouraged to point out potential concerns, and it is the duty of the author of the paper either to correct the problems or to demonstrate their unimportance. Somehow, though, the burden of proof shifts from the author (in the pre-publication stage) to the critic (after the paper has been published). It is not clear to me that either of these burdens is appropriate. I would prefer a smoother integration of scientific review at all stages, with pre-publication reports made public and post-publication reports appended to published articles.

Overall, I am inclined to paraphrase Al Smith and reply to Bross that the ills of criticism can be cured by more criticism. That said, I recognize that any system based on open exchange can be hijacked by hacks, trolls, and other insincere actors. The key issues in dealing with such people are economic and political, not statistical, but we still need to be able to learn from and respond to statistical criticisms, whatever their source.

P.S. The journal Observational Studies has posted the original article by Bross and all the discussions (by William Fairley and William Huber, Joseph Gastwirth, Jennifer Hill and Katherine Hoggatt, Daniel Ho, Charles Reichardt, David Rindskopf, Paul Rosenbaum and Dylan Small, and me).

4 thoughts on “Learning from and responding to statistical criticism”

  1. > “We can’t hope to anticipate all possible misreadings of our work, but it is good to take advantage of opportunities to clarify.”
    Unless the misreading was agnotologistic, i.e., encouraged in order to make the work seem more important/relevant than it actually is. In that case one would not want an opportunity to clarify the work’s lack of relevance/importance.

    Of course, echoing Sameera’s point, the critic should never be 100% certain of that.
