Great news: “From next year, eLife is eliminating accept/reject decisions after peer review, instead focusing on public reviews and assessments of preprints.”

Valentin Amrhein points to this new policy from a biology journal. Here it is:

From next year, we will no longer make accept/reject decisions at the end of the peer-review process; rather, all papers that have been peer-reviewed will be published on the eLife website as Reviewed Preprints, accompanied by an eLife assessment and public reviews. The authors will also be able to include a response to the assessment and reviews.

The decision on what to do next will then entirely be in the hands of the author; whether that’s to revise and resubmit, or to declare it as the final Version of Record.

Here’s the journal’s five-step process:

1. Submission

Submit your paper and you’ll hear if it is invited for peer review.

2. Peer review

Your paper undergoes consultative review by experts in the field and a publication fee is collected. You will then receive an eLife assessment, public reviews and confidential recommendations from reviewers on how to improve the paper.

3. Publication

Your paper is published on eLife’s website as a Reviewed Preprint along with the eLife assessment and public reviews. It is then citable.

The eLife assessment reflects the significance of the findings and the strength of the evidence reported in the preprint. You will also be able to include a response to the assessment and reviews.

4. Author revision

You control which revisions to make, and if and when to resubmit. If you revise, we will publish a new Reviewed Preprint with updated reviews and assessment.

5. Version of Record

At any point following peer review, you can choose to have your Reviewed Preprint published as the ‘Version of Record’.

This sounds just great.

31 thoughts on “Great news: “From next year, eLife is eliminating accept/reject decisions after peer review, instead focusing on public reviews and assessments of preprints.””

  1. This generated a lot of discussion when it was announced (Oct. 2022), and I wonder how its first month of implementation has gone. I like the experimentation with publishing, but there are aspects that I and others I’ve talked to find puzzling.

    (1) Desk rejection — i.e. immediate editorial rejection without review because your paper doesn’t seem cool enough — still exists. I’ve published one paper in eLife and had two desk-rejected in the past ~5 years.
    (2) It still costs $2000 to get your paper “published” in eLife. This is down from $3000, by the way.
    (3) I think there’s less motivation to *review* a paper for eLife now, since the review doesn’t really matter.
    (4) Combining all this, is it really that much of an advantage over just posting the preprint? I suppose this way one knows that *someone* has read the paper!

    Nonetheless, good for eLife for trying something different.

      • Another thing for reviewers to consider: it seems an unlimited number of revisions are allowed before the author decides on the final version. So if you agree to review a paper, does that mean you might be on the hook for 10 revisions?

      • eLife is providing a service that is not free. They curate scientific papers. To do this, editors (and subsequently reviewers) have to read the submitted papers in more or less detail and then make decisions (interesting vs. not interesting). This curation is useful! For papers selected as “interesting”, one can then read reviewer reports that are an independent assessment of quality (the reviewers could subsequently point out flaws that make the paper “not interesting”). I don’t understand how this process could be done well for free.

        • A couple points:
          1. How do we know the editors are doing a good job selecting interesting articles? I submitted a paper to eLife which I consider to be a seminal paper in computational biology and it was desk rejected.

          2. How do we know the reviewer reports are finding flaws? I’ve been constantly finding serious errors in high profile journals for work in my field.

          To me this service of curation is similar to IMDB or Rotten Tomatoes, neither of which anyone pays for. It’s actually impossible for me to think of anyone I would pay a couple thousand dollars to just to get their opinion on one of my papers. Heck I wouldn’t even pay 10 bucks.

        • How do we know the editors are doing a good job selecting interesting articles?

          I guess we don’t – and there’s a clue in the word “interesting”. i.e. it’s subjective. No problem with that – it’s their journal. Agree tho that it’s maddening as an author to have your manuscript (MS) rejected at the submission phase.

          As I understand it, eLife rejects ~70% of submissions in their initial triage. About half of the 30% sent for review are rejected in the review process. So around 15-ish% of submitted papers are published.

          In the new system it seems like they will still lose around 70% of submissions in the initial triage phase. Everything else is published even if it’s substandard by their own criteria (e.g. a paper can have “Incomplete” or “Inadequate” support for the interpretations made but will still be published).

          Thinking about publication charges: up to this year, the ~15% of submissions that got published were each charged £3000. Now the ~30% of submissions that get published are each charged £2000. That seems like a decent increase in revenue, combined with (it remains to be seen, tho) a drop in “quality”/“selectivity”. Interesting to see how this plays out, but it doesn’t address the main issues with contemporary scientific publishing.
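          As a quick sanity check, the expected fee income per submission implied by the figures above can be computed directly (the acceptance rates and fees are the commenter’s estimates, not official eLife numbers):

```python
# Back-of-envelope check of the revenue claim above. All inputs are the
# commenter's estimates (acceptance rates and fees), not official figures.
old_rate, old_fee = 0.15, 3000   # ~15% of submissions published at £3000
new_rate, new_fee = 0.30, 2000   # ~30% published at £2000 under the new model

old_revenue_per_submission = old_rate * old_fee   # expected fee income, old model
new_revenue_per_submission = new_rate * new_fee   # expected fee income, new model

increase = new_revenue_per_submission / old_revenue_per_submission - 1
print(f"old: £{old_revenue_per_submission:.0f} per submission, "
      f"new: £{new_revenue_per_submission:.0f}, change: {increase:+.0%}")
```

          Under those assumed figures, expected revenue per submission rises from £450 to £600, i.e. roughly a third more, which is consistent with the “decent increase in revenue” claim.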

  2. In all seriousness I could see this transition being very successful for eLife. In biology we often have to officially publish our papers even when we don’t really want to (we have to show our funder demonstrable progress on a grant, or a grad student needs a publication to graduate, etc.). Currently when people are desperate to get a paper published they’ll send it to a mega journal like Plos One or Scientific Reports. As long as these reviewed preprints count as a publication in the eyes of a funder/institution, this seems like a better route, because as long as it doesn’t get desk rejected you can decide to have it published no matter what the reviewers say. No additional experiments, etc. The only potential downside is that if you get negative reviews those will be public, but who’s actually going to read those? And your response to the reviews will also be posted, so you could always just claim their concerns aren’t valid and have the last word.

  3. I’m not getting how this would work with the whole tenure review side of academia. When I went on the job market, departments were single-mindedly focused on your publication record. It was to the point where many departments had an actual number which denoted the value for a top publication in each of the top fifty journals. In order to make tenure, you had to get a certain number of points. If everyone can get published anywhere, then what are they going to use for their metric instead? Your tap dancing ability?

    • >then what are they going to use for their metric instead? Your tap dancing ability?

      Hopefully the quality of your work, where citations and publications in so-called top journals are not used as a proxy for quality. But I’m not in academia, so I wouldn’t know.

    • If I understand your point correctly, you caution that widespread adoption of eLife’s review model would let anyone publish in any journal, which in turn would lead to some kind of regression toward the mean. That, in turn, would destroy the reputation hierarchy of journals.
      I would agree with the logic of your argument given your assumptions, but I would choose different assumptions to start with.
      Your argument appears to omit the preview by an editor. I consider this assumption relevant because as long as there is an editor previewing the paper, there is a gatekeeper. This ‘gate-keeping process’ would obviously need to be adjusted in order to ensure that the quality of the publications is high enough.
      If all journals in one field of research were to adopt this mode of review, I guess the different journals would create different entry barriers in that preview process, thus solidifying the ‘hierarchy’ among the journals. I therefore don’t believe universities would have to discard all of their metrics in determining who receives tenure.
      I’m curious whether this kind of peer review is going to become more popular. In my native field of economics I have read a few things about a paradigm shift toward rethinking how peer review can become better. In that sense I appreciate changes such as eLife’s, because they promote some debate on how to improve scientific discourse!

    • Probably something much sillier, like the cumulative quality of work you’ve put out over the course of your career to that point. If only we had a convenient points system!

    • Botekin,

      Worst case, the T&P committee might end up having to read some of your published work to decide whether you warrant tenure. What a drag that would be.

  4. You are not understanding the deal, Andrew. They still have acceptance; it’s just different tiers of acceptance.
    They have the tiers:
    – Landmark
    – Fundamental
    – Important
    – Valuable
    – Useful

    That’s the same thing that Nature has, with “Nature”, “Nature Genetics”, “Nature Communications”.

    But instead of having different journals, they have many journals inside one journal.

    Now people will say:
    “Wow he has three e-life landmarks!”
    “Ah, he only publishes e-life usefuls.”

    So nothing changes. If anything, the hierarchy and gatekeeping becomes worse.

      • The paper can have no label, which in essence is a rejection with an implicit “useless.”

        So in sum that’s all that there is to it. The same journal has different tiers, and “rejection” is still there, it’s just a quiet rejection, where you still post the pre-print.

  5. The main (and quite small IMO) advantage of this change seems to be that it reduces the time for a “validated” (i.e. peer-reviewed) manuscript to appear online. Otherwise the approach seems a little arrogant on the part of the editors (and it doesn’t really address any of the major issues around scientific publishing).

    Normally a manuscript (MS) is submitted to a journal; there is a certain level of triage (strong to non-existent depending on the “quality” of the journal); if the MS passes triage it gets sent to reviewers; an editor makes a decision to pursue or not (i.e. reject) based on reviewers feedback and an MS that passes this test is likely to go through one (maybe two; rarely three) rounds of revision.

    It’s then published and the paper makes its way in the world according to the interest (and citations) it receives. The true value of a paper accrues in the weeks, months, and years after publication.

    In the new eLife process the editors/reviewers seem to wish to “define” the value of the paper from the outset through their “eLife assessment” in which they can decide that the paper is “Landmark” or merely “Useful” and so on, and the support of the interpretations “Exceptional” or “Compelling” or even “Incomplete” or “Inadequate”. What’s the point of this conceit on the part of the journal? If the paper is publishable then just publish it and let others decide whether it turns out to be a “Landmark” or whatever? And why publish papers where the supporting evidence is “Incomplete” or “Inadequate”?

    Can’t see this making any difference to scientific publishing. In this particular case I imagine that interested readers will ignore the “eLife assessment” and treat the paper as they normally would (“is it interesting/useful to me?”). A downside might be that researchers who send their good-quality papers to eLife because they like the review process and the journal’s selectivity might decide they’re not so interested in a journal that has chosen to lower its standards by associating itself with papers that might otherwise be rejected as having “Incomplete” or “Inadequate” support for their interpretations. I guess we’ll see how this pans out.

  6. Michael Eisen (EIC at eLife) just made his Twitter private so I can’t see what he tweeted, but it looks like this might have been a short-lived experiment. Maybe some funders don’t consider these articles to be published, which limits who can use this model.

    • His Twitter is public again; he claims the model is thriving but that he’s been personally attacked for it.

      All his follow-up tweets seem in direct conflict with his first tweet, “I’m sorry. I tried.”

      Unless he’s implying that he’s being pushed out despite the claimed success of eLife’s new direction, not sure.

      • Jordan:

        I followed the links, and . . . damn! I have no idea what’s going on there at all! I can imagine that changing a peer-review process could be controversial, but I wouldn’t have thought it would involve personal attacks.

        Biology is just such a big-budget field compared to statistics or political science. The stakes are higher, everyone has these big labs, publication in journals is such a big deal . . . a much different world than what I live in.

        • I also don’t know what’s happening obviously, but one thing I could imagine is maybe previous authors of articles at eLife are upset. eLife had advertised itself as a selective, prestigious journal, so a lot of people published there because of that. So if eLife now develops a reputation for publishing anything then it could devalue those previous publications (when looking at a resume will people distinguish in their minds eLife articles published before and after a certain date?).

          It’s hard to imagine why people who haven’t published at eLife would be upset because eLife is just one of many journals, if people don’t like their model there are tons of other journals for them to publish in.

          P.S. I had referred to this in a previous comment, but my article which was desk rejected at eLife has been invited to be reviewed at a journal with a much higher impact factor, so I am personally opposed to desk editors thinking they know which articles are important enough for their journal, which will continue in eLife’s new model. To be honest the most important factor seems to be knowing the editor of the journal.

  7. There is already enough gatekeeping at the grant level (which is far too centralized, but that is a separate issue). For a paper you only really need two labels:

    1) Do the authors describe their methods in enough detail so someone else can attempt a replication?
    2) Do the authors make any interesting predictions, i.e., predictions that would be surprising if the alternative explanations people can come up with were true?

    The current system has resulted in a failure to replicate ~80% of claims that an effect is statistically significant in a positive/negative direction. And that is a very weak criterion to begin with: with sufficient sample size you would get a ~50% replication rate, because there is always some kind of difference between two groups.
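    The ~50% point can be illustrated with a toy simulation (the effect and noise scales below are hypothetical choices for illustration, not estimates from any replication project): if true effects are negligible relative to the original studies’ sampling noise, then with huge replication samples every difference is “significant”, but the direction agrees with the original only about half the time.

```python
import numpy as np

# Toy simulation: near-zero true effects, noisy original studies,
# essentially noise-free ("sufficient sample size") replications.
rng = np.random.default_rng(0)
n_studies = 20_000
true_effect = rng.normal(0.0, 0.001, n_studies)    # essentially zero effects

# Original studies: small samples, so estimates are mostly sampling noise.
original_estimate = true_effect + rng.normal(0.0, 0.1, n_studies)

# Replications with huge n: estimates pin down the (tiny) true effect,
# and any nonzero difference comes out "statistically significant".
replication_estimate = true_effect + rng.normal(0.0, 0.0001, n_studies)

# The sign of the original estimate was driven by noise, so the
# direction "replicates" only about half the time.
same_sign = np.sign(original_estimate) == np.sign(replication_estimate)
print(f"direction replicated: {same_sign.mean():.1%}")
```

    The direction-agreement rate lands near 50%, which is the commenter’s point: sign replication is a near-coin-flip when true effects are tiny, regardless of sample size.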

  8. The impact factor for PLOS One is ~4. It would be closer to 1 except for a few relatively well-cited papers that were perhaps submitted for fast publication due to competition concerns. This plan in effect turns eLife into a version of PLOS One that will, however, be more work for editors, readers, and reviewers. I would expect these plans to cut eLife’s impact factor to less than half its current value fairly quickly. In essence: lots more work for everyone, including readers, and no quality control. A sad end for the aspirations of scientific quality as a driver for eLife.

    • What extra work for the reader?

      We shouldn’t be uncritically reading papers just because they were peer reviewed. The only thing peer review has ever been shown to do is enforce the status quo by stifling observations or ideas that are inconvenient to whatever standard narrative. In fact, peer review probably makes it harder to interpret a paper, because you know it has gone through that filter. I.e., findings with low prior probability are actually more reliable than those with high prior probability, because they’ve undergone more stringent selection.

      Then there is the problem shown by the cancer replication project that no one can replicate ~50% of what gets published even *in principle*.
