They misreport their experiments and don’t fess up when they’ve been caught.

Javier Benitez points us to this paper, “COMPare: Qualitative analysis of researchers’ responses to critical correspondence on a cohort of 58 misreported trials,” by Ben Goldacre, Henry Drysdale, Cicely Marston, Kamal Mahtani, Aaron Dale, Ioan Milosevic, Eirion Slade, Philip Hartley and Carl Heneghan, who write:

Discrepancies between pre-specified and reported outcomes are an important and prevalent source of bias in clinical trials. COMPare (Centre for Evidence-Based Medicine Outcome Monitoring Project) monitored all trials in five leading journals for correct outcome reporting, submitted correction letters on all misreported trials in real time, and then monitored responses from editors and trialists. . . .

Trialists frequently expressed views that contradicted the CONSORT (Consolidated Standards of Reporting Trials) guidelines or made inaccurate statements about correct outcome reporting. Common themes were: stating that pre-specification after trial commencement is acceptable; incorrect statements about registries; incorrect statements around the handling of multiple time points; and failure to recognise the need to report changes to pre-specified outcomes in the trial report. We identified additional themes in the approaches taken by researchers when responding to critical correspondence, including the following: ad hominem criticism; arguing that trialists should be trusted, rather than follow guidelines for trial reporting; appealing to the existence of a novel category of outcomes whose results need not necessarily be reported; incorrect statements by researchers about their own paper; and statements undermining transparency infrastructure, such as trial registers.

In short: It’s not just Wansink. Pizzagate’s the tip of the iceberg.

Tomorrow’s post: “Persistent metabolic youth in the aging female brain”??


  1. Roy Tamura says:

    Thanks for posting that article. I don’t agree with everything that COMPare considers an ‘outcome’, but the assertion that public registries’ pre-specified outcomes and publication results should match is sound. The takeaway for those of us in medical research is that we need to be more diligent in making sure registries are updated as protocols are amended, and also that our publications correctly identify those outcomes which were pre-specified and those which weren’t.

  2. They do seem to suggest that “Disagreement with the general approach of COMPare/CONSORT” is simply not allowed*.
    (Groupthink rearing its ugly head?)

    Now, I guess as a disclaimer, I have worked with the three executives of CONSORT* and many of their colleagues and often raised critical concerns with parts of their work.

    * COMPare Centre for Evidence-Based Medicine Outcome Monitoring Project, CONSORT Consolidated Standards of Reporting Trials

    Now, on a positive note, this sounds interesting –
    We argue that prospective cohort studies which additionally issue correction letters in real time on each individual flawed study—and then follow-up responses from trialists and journals—are more impactful, more informative for those consuming the studies critiqued, more informative on the causes of shortcomings in research, and a better use of research resources.

  3. RJB says:

    I always look at these things like an accountant. It sounds like COMPare/CONSORT came up with a list of accounting rules, did a post-publication check for noncompliance, and asked for corrections. If the journals claim to demand compliance and the authors in fact didn’t comply, asking for corrections is fine.

    But the authors of this article aren’t using language about compliance; they are using language about what is and is not “correct”, and viewing every response other than immediate compliance as a character flaw.

    I didn’t read the article closely enough to see exactly what the noncompliance issues were, but no accounting system is perfect, and it wouldn’t surprise me at all to find (as accountants find all the time) that (1) people disagree about which rules are ‘correct’ in the general case, and (2) in a specific case, some people will reasonably believe that complying with the rules actually makes reports less informative. Of course, those who set the rules rarely like such talk: they want people to believe the rules are sacrosanct and uncontroversial.

    Maybe if I dug into the article, I would agree with the rule-setters, but they come across as overly confident in their rules, and overly judgmental of any who would question them.

    • > come across as overly confident in their rules, and overly judgmental of any who would question them.
      That was my take, and I recall Stephen Senn writing something similar about earlier efforts.

      Just part of that false sense about being able to be certain about methodology.
