A couple of years ago I wrote an article, “It’s too hard to publish criticisms and obtain data for replication.” I gave two examples demonstrating my struggles, and those of others, to get journals to admit errors. The problem is that the standards for post-publication review are higher than for pre-publication review. You can find an error that would clearly make a paper ineligible for publication—but if the paper has already appeared, it’s not enough to point out the error; you need to demonstrate “irrefutable proof,” in the words of one of the authors whose work had been questioned. We’ve talked about this a lot, the idea that once a paper is published it is supposed to have some special truth-status.
Recently David Allison, Andrew Brown, Brandon George, and Kathryn Kaiser published an important article making similar points but focusing on journals’ policies on corrections.
It’s not a pretty sight. Here are some quotes:
In the course of assembling weekly lists of articles in our field, we began noticing more peer-reviewed articles containing what we call substantial or invalidating errors. These involve factual mistakes or veer substantially from clearly accepted procedures in ways that, if corrected, might alter a paper’s conclusions. . . .
After attempting to address more than 25 of these errors with letters to authors or journals, and identifying at least a dozen more, we had to stop — the work took too much of our time. . . .
Too often, the process spiralled through layers of ineffective e-mails among authors, editors and unidentified journal representatives, often without any public statement added to the original article. . . .
Science relies essentially but complacently on self-correction, yet scientific publishing raises severe disincentives against such correction. One publisher states that it will charge the author who initiates withdrawal of a published paper US$10,000.
Journals rarely state whom to contact about potentially invalidating errors. We had to guess whether to send letters to a staff member or editor, formally submit the letter as a manuscript, or contact the authors of a paper directly.
That’s what happened to me with the American Sociological Review. They have no method of publishing corrections. All I could do was submit my correction as its own article. In the review process they did not disagree with my points at all, but they refused to publish the correction on the grounds that they only publish the very best submissions to ASR. Of course, they had published the original article with its error, but since the error was not caught in the original round of reviewing, I guess it stands forever as ASR-worthy!
Allison et al. continue:
For one article that we believed contained an invalidating error, our options were to post a comment in an online commenting system or pay a ‘discounted’ submission fee of US$1,716. With another journal from the same publisher, the fee was £1,470 (US$2,100) to publish a letter. Letters from the journal advised that “we are unable to take editorial considerations into account when assessing waiver requests, only the author’s documented ability to pay”.
Wow! I wonder what publisher that was! I’m reminded of that journal “Wiley Interdisciplinary Reviews: Computational Statistics” which was charging people $2800 for that article that Weggy plagiarized from Wikipedia. Who knows, though, maybe a garbled and plagiarized Wikipedia article is worth the price—it does bear the signature of a recipient of the American Statistical Association Founders Award. . . .
Allison et al. conclude:
Scientists who engage in post-publication review often do so out of a sense of duty to their community, but this important work does not come with the same prestige as other scientific endeavours. Recognizing and incentivizing such activities could go a long way to cleaning up the literature.
Our work was not a systematic search; we simply looked more closely at papers that caught our eye and that we were prepared to assess. We do not know the rate of errors or the motivations behind them (that is, whether they are honest mistakes or a ‘sleight of statistics’). But we showed that a small team of investigators with expertise in statistics and experimental design could find dozens of problematic papers while keeping abreast of the literature. Most were detected simply by reading the paper.
No joke. Especially if you include as errors things like basing strong claims on p-values that are subject to the garden of forking paths, or miscalculated p-values that get pushed over to the right side of .05. You’d have to trash some huge chunk of the literature on embodied cognition, and I think literally half the papers that have appeared in the journal Psychological Science in the past five years. (See, for example, slide 16 here).
That’s part of the problem right there. If that many papers deserve retraction, and if every retraction required its own investigation, it would suck up all the resources of the journals for years. And this doesn’t even get into the costs in prestige. If PPNAS retracts the himmicanes paper and the Chinese air pollution paper and all the rest of those noise-chasing social science papers it’s been publishing, if JPSP goes into its archives and retracts all the erroneous papers in its history . . . can you imagine?
So unfortunately I don’t think it’s possible. Reform would be great, post-publication review is great, but I think we just have to give up on retraction. The system is just too big to fail.
The new rules
Let’s just put a bright line down right now. 2016 is year 1. Everything published before 2016 is provisional. Don’t take publication as meaning much of anything, and just cos a paper’s been cited approvingly, that’s not enough either. You have to read each paper on its own. Anything published in 2015 or earlier is part of the “too big to fail” era; it’s potentially a junk bond supported by toxic loans, and you shouldn’t rely on it.