No, there is no “tension between getting it fast and getting it right”

When reading Retraction Watch, I came across this quote:

“There is always a tension between getting it fast and getting it right,” said Dr. Marcia Angell, another former editor in chief of the New England Journal of Medicine. “I always favored getting it right. But in the current pandemic, that balance may have shifted too far toward getting it fast.”

On first hearing, this statement sounds reasonable.

Back when I took typing class in 9th grade, they taught us about the tradeoff between speed and accuracy. The faster you can type, the more errors you make.

But I’m thinking this doesn’t apply so much in science. It’s almost the opposite: the quicker you get your ideas out there, the more you can get feedback and find the problems.

OK, not completely. You don’t want to publish something that’s a hot mess, that will just waste the time of your readers.

But I’m guessing that traditional peer review will not catch this sort of thing anyway.

A longer post-publication review period is fine, but let’s do it out in the open. I don’t think a longer secret pre-publication review will help anything.

Sequential decision making, baby. Sequential decision making.

Also check out this comment from Chris Crandall about a recently-retracted paper in psychology:

Decisions about acceptance/rejection/revision are made by the editor and not the reviewers. If there is a failure of peer review here, the responsibility lies primarily with the action editor (and the policy set by the senior editor/association), who selects reviewers, interprets their advice, makes the decision, oversees revisions, and accepts the paper. I have heard from one reviewer of this paper, a very statistically sophisticated person, who tried to get the editor to reject the paper based on the very low quality of the data and the over-interpretation of it.

It’s important to focus on where the real responsibility lies—we’re often unsure about the nature of what the reviewers did, but the ultimate responsibility lies in the hands of the person who made the decision, and not her/his advisors.

9 Comments

  1. yyw says:

An equally if not more problematic statement from that former EIC of NEJM was that if mistakes are found after publication, then the journal has failed. How could someone get to that position with such a superficial understanding of science (assuming the statement was sincere)?

  2. Mendel says:

One problem with the Santa Clara study was that they wanted to get 3000 participants fast (like “this weekend” fast), and consequently chose a method to recruit them that meant sacrificing the quality of the data. You can’t fix that with peer review.

    • Jim says:

But you can reject the paper.

      If papers don’t get rejected when there are obvious major problems, then there is much less incentive to do the job right in the first place.

      • Zad says:

Yeah, but what does rejection by a few hand-chosen experts behind curtains actually do besides result in the paper/data being resubmitted elsewhere or filed away in a drawer that never sees the light of day? If there are huge, obvious problems with a study/paper, better to discuss them publicly rather than have a rejection behind closed doors and never learn from it.

        • Joshua says:

          Zad –

> Yeah, but what does rejection by a few hand-chosen experts behind curtains actually do besides result in the paper/data being resubmitted elsewhere or filed away in a drawer that never sees the light of day? If there are huge, obvious problems with a study/paper, better to discuss them publicly rather than have a rejection behind closed doors and never learn from it.

          Maybe I need to be more cynical – but I like to believe that some % of papers that are rejected in first review (with comments on what the flaws were) can result in revisions before a second submission to a second journal. Recursively.

      • Mendel says:

        What I’m saying is that there is a knock-on effect.
        If you know your paper is going to be stuck in peer review for 3 months anyway before anyone else gets to read it, saving a few days at the data collection stage is immaterial. The study design is affected by the prospect of getting a paper out quickly.

        What we saw at first were good pre-print papers on the epidemic gaining traction without being traditionally published; which was ok when those papers were fewer and published by groups at the top of the field.
        But once the volume of pre-publications increases and the average quality drops, picking out the wheat from the chaff becomes a crowd process similar to up-voting posts on social media. It burdens the readers with more due diligence, and potentially gives the scientific equivalent of conspiracy theories more of a platform.

        Whether that kind of process works better than traditional peer review or not seems like an interesting topic for research.

  3. Anthony says:

    I get the impression that pre-publication peer review focuses on making sure the paper presents its hypothesis, methods, data, and conclusion in a readable and logical way, and that the math isn’t too egregiously wrong. But it’s primarily an inside-view check which won’t catch things like weirdness in the data set or participant pool, or inappropriate statistical manipulation, or other flaws much more visible to outside-view review.

    The problem is that the inside-view checks need to be done, which means reviewers immersed in the field, which means a very low likelihood of getting a decent outside-view review. Posting to pre-print servers or other ways of getting one’s research in the open should make it more likely to get a competent outside-view review, but that depends on anyone at all having a look.

  4. Ron Kenett says:

The quality–productivity paradox is this:
    If you improve quality, you enhance productivity and get more for less. This is the theme of Deming’s book, which was the basis for his workshops: “Quality, Productivity, and Competitive Position.”
    On the other hand, if you push for increased productivity, you push down quality, and this downward spiral gets you less quality, less productivity, and so on.

To resolve this paradox, the trick is to take a process view. Improving the process leads you down the path of improved quality and enhanced productivity.

The same applies to peer review. I made an attempt to improve the process by setting up checklists. Such checklists are used in healthcare, as this blog has mentioned, and they are also used in aviation. Why not in peer review? See https://ssrn.com/abstract=3591808

  5. I think Chris Crandall has a point: reviewers, being anonymous, cannot take any responsibility; the editors take it all.
