Simon Gates writes:
Something published recently that you might consider blogging: a truly terrible article in Lancet Oncology.
It raises the issue of interpreting trials of similar agents and the issue of multiplicity. However, it takes a “dichotomaniac” view and so is only concerned about whether results are “significant” (=”positive”) or not, and suggests applying Bonferroni-type multiplicity adjustments to them. I found it amazing that such an approach was being seriously suggested in 2019 – it’s like decades of systematic reviews, the Cochrane Collaboration and so on never happened.
The first author has a podcast called Plenary Session that is widely listened to by doctors, so he’s quite influential. From what I’ve heard it’s usually pretty good, and raises some really important points about research and clinical trials. But I think he’s got it badly wrong here. The consequences concern me; this is in one of the top journals in oncology; it’s read by doctors treating people for very serious conditions, who make life and death decisions, so it’s really important to get these issues right and not mislead ourselves. I’ve written a letter to the journal (just heard it was accepted this morning) but I don’t think such letters really have much impact. Most people who read the paper won’t see the letter.
I guess most people who read the paper won’t see this blog either, but here goes . . .
I followed the link and I didn’t think the paper in question was so terrible. I mean, sure, the method they propose is not something that I would ever recommend—but it seemed to me that the main purpose of the paper was not so much to recommend Bonferroni-type multiplicity adjustments (a bad idea, I agree) but rather to warn people not to take published significance tests and confidence intervals seriously when there are forking paths in the data processing and analysis.
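For concreteness, here’s a minimal sketch (in Python; the numbers are made up, not taken from the article) of what a Bonferroni-type adjustment does: with m comparisons, each p-value is held to the stricter threshold alpha/m. This controls the family-wise error rate, but notice that its output is purely a yes/no verdict on each trial, which is exactly the dichotomous framing at issue:

```python
# Minimal sketch of a Bonferroni-type multiplicity adjustment.
# The p-values below are invented for illustration; they are not
# from the Lancet Oncology article under discussion.

def bonferroni(p_values, alpha=0.05):
    """Return the per-test threshold alpha/m and a significance flag per test."""
    m = len(p_values)
    threshold = alpha / m  # each test is now held to alpha/m
    return threshold, [p <= threshold for p in p_values]

p_values = [0.012, 0.030, 0.004, 0.250]
threshold, flags = bonferroni(p_values)
# With m = 4 trials the per-test threshold drops from 0.05 to 0.0125,
# so two of the four results flip to "not significant" -- an in/out
# verdict that says nothing about effect sizes or forking paths.
print(threshold, flags)
```

The point of showing this is that the adjustment only relabels results as “significant” or not; it doesn’t address the underlying problems of selection and effect-size estimation.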
And I disagree with their statement that “we need to correct for the portfolio of trials not within a single pharmaceutical company but across all companies”—I think it’s good to analyze the whole portfolio of trials, not to skim the statistically significant results and then use statistical methods to estimate the rest of the iceberg.
But I agree with their conclusion:
To deliver high-quality cancer care in a sustainable health ecosystem, clinicians, investigators, and policy makers will need to identify therapies that offer benefits that are substantial in magnitude and not statistical artifacts.
So I would not call this a “truly terrible article.” I’d say that it’s a reasonable article that is imperfect in that it is operating under an outdated statistical framework (unfortunately a framework that remains dominant in theoretical and applied statistics!). But, as a whole, it still seems reasonable for doctors to read this sort of paper.
P.S. I wrote this post in part to avoid selection bias. I get lots of emails from people pointing out things they read and liked, and I share these with you. And I get lots of emails from people pointing out things they read and hated, and I share those too. So when I get such emails and I disagree with the reader’s assessment, I should let you know that too!