One thing that struck me about this PACE scandal: if this study was as bad as all that, how did it get taken so seriously by policymakers and the press?
There’s been a lot of discussion about serious flaws in the published papers, and even more discussion about the unforgivable refusal of the research team to share their data. But the question I want to address here is, how did they get into the position where this research got taken seriously in the first place?
As David Tuller, a public health and journalism lecturer at UC Berkeley, put it:
The thing about PACE that has astonished me is how papers with such obvious flaws were accepted immediately by the entire UK academic/psychiatric/medical establishment, including the Lancet. It’s a case of mass confirmation bias. The emperor really has no clothes, and the patients have known it for years and have been screaming about it. But they have been dismissed as irrational and dangerous. I got interested because a friend has been sick for 20+ years and he nudged me to look into PACE.
No paper with an analysis in which you can get worse and be counted as improved should ever be published. Their outcome thresholds for being “recovered” or “within normal range” on their two primary outcomes of fatigue and physical function demonstrated worse health than the entry scores required to demonstrate disability for both measures [see here for details]. It’s absurd. And yet the study has been presented as definitive for years—in the literature, public health agencies, and in public understanding.
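To make the overlapping-thresholds problem concrete, here's a minimal sketch. The numbers are the widely reported SF-36 physical-function criteria from PACE (entry at or below 65, "within normal range" at or above 60); treat them as illustrative rather than authoritative:

```python
# Illustrative sketch of the overlapping-thresholds problem described above.
# Numbers are the widely reported SF-36 physical-function criteria from the
# PACE trial (entry: score <= 65 counts as disabled enough to enroll;
# outcome: score >= 60 counts as "within normal range") -- illustrative only.

ENTRY_MAX = 65   # at or below this score, a patient qualifies for the trial
NORMAL_MIN = 60  # at or above this score, a patient counts as "normal range"

def eligible_at_entry(score: int) -> bool:
    """Impaired enough to be enrolled as a patient."""
    return score <= ENTRY_MAX

def within_normal_range(score: int) -> bool:
    """Healthy enough to be counted as within the normal range."""
    return score >= NORMAL_MIN

# A patient can enter at 65, *decline* to 60, and still be counted as
# "within normal range" at follow-up: worse health, better-sounding label.
entry_score, followup_score = 65, 60
assert eligible_at_entry(entry_score)
assert followup_score < entry_score         # the patient actually got worse
assert within_normal_range(followup_score)  # ...yet counts as normal range
```

Because the outcome threshold sits below the entry threshold, the two criteria overlap, which is exactly the "get worse and be counted as improved" absurdity Tuller describes.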
Again, if it’s that bad, how did it stand for so long? One possibility is that that analysis wasn’t so horrible, but given everything I’ve been hearing—and the lack of any convincing defense from the paper’s authors—I’m inclined to think that Tuller’s correspondents are probably right, that the study is fatally flawed.
So, again, how did it stay afloat for so long?
Part of it must be that it was a large, well-funded study on an important problem. I guess there’s no other comparable study to knock it off its perch.
Another factor is the reputation of the Lancet, characterized by Wikipedia as follows:
The Lancet is a weekly peer-reviewed general medical journal. It is one of the world’s oldest and best known general medical journals, and has been described as one of the most prestigious medical journals in the world. . . . In the 2014 Journal Citation Reports, The Lancet’s impact factor was ranked second among general medical journals (at 45.217), after The New England Journal of Medicine (55.873).
Getting published in any top journal, Lancet or Science or Nature or PPNAS or any other, is a crapshoot. It’s hard to get any particular paper into these highly competitive journals, but what does get published is pretty random. And then once the paper is there, many people will consider it correct unless there’s a huge backlash against it.
What I’m saying is, I suspect that the Lancet’s brand name gave this paper a pass.
In this way, the Lancet (and other high-profile journals such as PPNAS) plays a role in science publishing similar to that of the Ivy League among universities: It’s hard to get in, but once you’re in, you have that Ivy League credential, and you have to really screw up to lose that badge of distinction.
Or, to bring up another analogy I’ve used in the past, the current system of science publication and publicity is like someone who has a high fence around his property but then keeps the doors of his house unlocked. Any burglar who manages to get inside the estate then has free run of the house.
In this case, the “fence” is the requirement that a research paper overcome a series of hurdles: there should be a statistically significant p-value, ideally there should be some random sampling or random assignment, there should be publication in a top journal. Once the hurdles are overcome, it takes a lot a lot a lot to shake faith in it. Sure, there are some famous examples of published papers that have become laughingstocks (ESP, himmicanes, ovulation and voting, air pollution in China, children of beautiful parents, etc.), but that’s my point: it took a lot of slamming to discredit even these hugely-flawed studies.
Let me put it another way: Yes, it’s hard to get a paper published in the Lancet, Science, Nature, Psych Science, PPNAS, etc. Really hard. But the difficulty of acceptance does not imply that the papers that do get accepted are always any good. Publication is a random process. Newsworthiness counts for a lot, and newsworthiness can at times get in the way of science. An attitude of certainty counts for a lot (remember all those p-values), and certainty can get in the way of science too.
What about the Lancet’s review process for the now-notorious PACE paper? Tuller writes:
In explaining The Lancet’s decision to publish the results, Horton told the interviewer [for Australian radio] that the paper had undergone “endless rounds of peer review.” Yet the ScienceDirect database version of the article indicated that The Lancet had “fast-tracked” it to publication. According to current Lancet policy, a standard fast-tracked article is published within four weeks of receipt of the manuscript.
Endless rounds of review in four weeks. Shades of Zeno here. Or maybe the fast-tracking procedure has changed since 2011, when the paper was published.
Anyway, even if the paper hadn’t been fast-tracked in four weeks, and even if the review really did have “endless rounds,” it’s still just a few people reviewing, and typically these reviewers have neither access to the raw data nor the time for careful reanalysis.
That’s fine—reviewers are busy, and they’re reviewing for free. I review dozens of submitted journal articles a year, and I consider this part of the “service” aspect of my job. It’s not the reviewer’s job to catch every problem.
But that’s my point. So what if the paper had endless rounds of peer review?
As Dan Kahan might say, what do you call a flawed paper that was published in a journal with impact factor 50 after endless rounds of peer review? A flawed paper.
And that’s fine too, it’s ok for Lancet to publish flawed papers. If you run a journal, you’ll publish flawed papers. Mistakes are inevitable. What I can’t excuse is the journal editor’s dogged defense of a flawed paper. What’s that all about?
And this brings us to the topic of the current post.
Journalists and policymakers are trained to believe things that appear in top journals. On the other hand, reputations change. Psychological Science used to be considered a top journal, and maybe it will be considered a top journal again, but right now it’s notorious for junk science. The American Sociological Review is considered the top journal in that field, but my own experience is that they refused to run a correction. Why? Because they don’t run corrections. That doesn’t give me so much confidence in the papers in that journal that I haven’t happened to look at. PPNAS has the himmicanes and hurricanes and other such studies. As for the Lancet . . . there’s this study and then there was that Iraq deaths paper from a few years back. These are the two papers that first come to mind when I hear “The Lancet.” Not such good news for the journal’s reputation.
My point is not that Lancet papers are worse than those in other journals. My concern is that Lancet papers are taken more seriously than they should be. Publishing a paper in Lancet is fine. But then if the paper has problems, it has problems. At that point its authors shouldn’t try to hide behind the Lancet reputation, which seems to be what is happening here. And, yes, if that happens enough, it should degrade the journal’s reputation. If a journal is not willing to rectify errors, that’s a problem no matter what the journal is.
Remember Newton’s third law? It works with reputations too. The Lancet editor is using his journal’s reputation to defend the controversial study. But, as the study becomes more and more disparaged, the sharing of reputation goes the other way.
I can imagine the conversations that will occur:
Scientist A: My new paper was published in the Lancet!
Scientist B: The Lancet, eh? Isn’t that the journal that published the discredited Iraq survey, the Andrew Wakefield paper, and that weird PACE study?
A: Ummm, yeah, but my article isn’t one of those Lancet papers. It’s published in the serious, non-politicized section of the magazine.
B: Oh, I get it: The Lancet is like the Wall Street Journal—trust the articles, not the opinion pages?
A: Not quite like that, but, yeah: If you read between the lines, you can figure out which Lancet papers are worth reading.
B: Ahhh, I get it.
Now we just have to explain this to journalists and policymakers and we’ll be in great shape. Maybe the Lancet could use some sort of tagging system, so that outsiders can know which of its articles can be trusted and which are just, y’know, there?
Long run, reputation should catch up to reality. But before the long run comes, there are a few people out there with chronic fatigue syndrome who don’t feel like waiting.
P.S. I don’t often edit Wikipedia, but this time I was moved to round off those ridiculously over-precise numbers in the quote near the top of this post. I also added an item on the PACE study to the “Controversy” section of the page.