An efficiency argument for post-publication review

This came up in a discussion last week:

We were talking about problems with the review process in scientific journals, and a commenter suggested that prepublication review should be more rigorous:

There are a lot of statistical missteps you just can’t catch until you actually have the replication data in front of you to work with and look at. Andrew, do you think we will ever see a system implemented where you have to submit the replication code with the initial submission of the paper, rather than only upon publication (or not at all)? If reviewers had the replication files, they could catch many more of these types of arbitrary specification and fishing problems that produce erroneous results, saving the journal from the need for a correction. . . . I review papers all the time and sometimes I suspect there might be something weird going on in the data, but without the data itself I often just have to take the author(s)’ word for it that when they say they do X, they actually did X, etc. . . . then bad science gets through and people can only catch the mistakes post-publication, triggering all this bs from journals about not publishing corrections.

I responded that, no, I don’t think that beefing up prepublication review is a good idea:

As a reviewer I am not going to want to spend the time finding flaws in a submitted paper. I’ve always been told that it is the author, not the journal, who is responsible for the correctness of the claims. As a reviewer, I will, however, write that the paper does not give enough information and I can’t figure out what it’s doing.

Ultimately I think the only only only solution here is post-publication review. The advantage of post-publication review is that its resources are channeled to the more important cases: papers on important topics (such as Reinhart and Rogoff) or papers that get lots of publicity (such as power pose). In contrast, with regular journal submission, every paper gets reviewed, and it would be a huge waste of effort for all these papers to be carefully scrutinized. We have better things to do.

This is an efficiency argument. Reviewing resources are limited (recall that millions of scientific papers are published each year), so it makes sense to devote them to work that people care about.

And, remember, the problem with peer review is the peers.

25 thoughts on “An efficiency argument for post-publication review”

  1. There’s PubPeer (https://pubpeer.com/). I’ve found this a helpful resource, and even contributed. It would be helpful if journals linked articles directly to it, as currently you need to know to look. I like that PubPeer requires that you demonstrate that you are a published author yourself before it will let you post (vaguely recalling that you must have been first or final author), so it has something of a filtering effect in terms of the quality of posts.

    What I’d really like to see is something like a numerical rating system tied into post-publication review, and to have this tied to career incentives, with the goal of incentivizing investigators to do better research, along with appropriate analyses and interpretations.

    • “I like that PubPeer requires that you demonstrate that you are a published author yourself before it will let you post (vaguely recalling that you must have been first or final author),”

      As I recall, you can also post reviews without this — but they are noted under a different heading.

      “What I’d really like to see is something like a numerical rating system tied into post-publication review”

      This could be tricky — I can imagine it easily going astray.

    • Garnett:

      I think all reviews should be published, and I guess it makes sense for reviews to be anonymous in some cases and not in others. My point above was that it’s logical for reviews to be after, rather than before, publication, because then reviewing resources can be more efficiently allocated to papers that are more important or that are getting more attention.

      • I was thinking that reviewers might be ‘incentivized’ to provide good reviews if their reviews were ‘published’ along with the paper.

        A post-publication review will need to be published to make it appealing, but where? In a later issue of the same journal? How is that different from a ‘Letter to the Editor’?

        • Garnett:

          I think the reviews should be published in the same (online) place as the original article. And I agree with you about the incentives. It should be ok to review anonymously, but it should also be ok for the reviewer’s name to be published.

        • What about this for a radical idea:

          1) all reviews should be published alongside the original work with the names of the reviewers.

          2) if the paper is published and the reviewer endorsed this decision, the reviewer could then put the name of the paper on their CV under a new section, “Reviewed papers”.

          3) on their CV, over time, they can update the number of citations the reviewed papers received. Listing the number of citations could be seen as representing the “quality” of the papers the reviewer helped produce, and hence reflect on the reviewer. As such, it could even become a criterion for evaluating researchers and could be turned into a metric in some form (a rough sketch of one way such a metric might be computed follows after this list).

          This all could:

          a) increase the overall willingness to review papers.
          b) decrease the willingness to endorse a “bad” paper because your name is linked to it.
          c) lead to better reviews and higher-quality published papers.
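
          To make 3) a bit more concrete, here is a minimal sketch of how such a “reviewed papers” metric might be computed. The reviewer_score function and all the numbers below are hypothetical, and citations-per-year is just one possible weighting, not a worked-out proposal:

          ```python
          from datetime import date

          def reviewer_score(reviewed_papers, current_year=None):
              """Toy metric: average citations per year of the papers a reviewer
              endorsed. Purely illustrative; a real metric would need far more care."""
              if current_year is None:
                  current_year = date.today().year
              if not reviewed_papers:
                  return 0.0
              rates = []
              for paper in reviewed_papers:
                  years = max(1, current_year - paper["year_published"])
                  rates.append(paper["citations"] / years)  # citations per year for this paper
              return sum(rates) / len(rates)

          # Hypothetical "Reviewed papers" CV entries; all numbers are made up.
          reviews = [
              {"title": "Paper A", "year_published": 2014, "citations": 120},
              {"title": "Paper B", "year_published": 2016, "citations": 8},
          ]
          print(reviewer_score(reviews, current_year=2017))  # 24.0
          ```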

        • I don’t know what “review spam” is, but if it’s something like e-mail spam, I don’t see how that would lead to it.

          This is because I presume that 1) the number of publications in need of review will stay the same as it is now, and 2) I don’t see why the publications in need of review would be distributed among reviewers any differently than they are now.

          The idea probably sucks though, I just haven’t figured out why it would yet. The only thing I can think of is that designing an index for reviewing, somehow using the citation numbers of the reviewed papers, could lead to unwanted things like approving manuscripts on “hip” topics that are likely to get lots of citations. On the other hand, you don’t want your name attached to what turns out to be a “bad” paper in several years’ time, when the “hipness” wears off. I think this trade-off could lead to a nice balance.

        • A list of suggestions I came up with:

          (a) Pay reviewers generously for their time. When a funding agency can spend $1,000,000 on a project, surely it can spend a few thousand to compensate reviewers.

          (b) Reveal which Professors signed off on a particular paper as reviewers. At least when things go wrong.

          (c) On every funded project, set aside a fraction of the funding for independent, paid replications.

          (d) Make pre-registration mandatory for a certain class of articles.

          (e) No acceptance of a paper for review by a journal before code & data are posted to an online repository.

          (f) Fund arXiv to expand its coverage to all subject areas, and upgrade the arXiv interface to allow commenting on articles.

          (g) Get more outside-of-subject experts to comment on funding proposals & reviews.

          (i) Force authors to clearly distinguish whether their work is exploratory or confirmatory.

          (j) Reduce the blind reliance of career-progress & funding committees on publication-count metrics & impact factors.

          (k) Journals must appoint a panel of paid methodological experts (e.g. statisticians, econometricians, code reviewers, image analysts, analytical chemists) to vet every paper for routine protocol-related technicalities.

          (l) Random, post hoc audits of a certain fraction of published papers, with detailed analysis down to the level of verifying lab notebooks, visiting the site, checking survey questionnaires, etc.

          (m) Force raw data (e.g. survey response forms, data dumps, raw data disks) to be stored permanently in a secure central repository.

          (n) Universities to discipline their PR teams when they issue hyped-up news releases with little basis in the actual results.

          (o) Professional associations of doctors, psychologists, political scientists, etc. to stop staying mum about absolutely crappy or overhyped results.

          But I’m not optimistic things will change, because ultimately it comes down to motivation. For most published articles, no one cares.

        • +1 for (c), which I advocated here http://blog.f1000research.com/2016/09/12/moving-to-opportunity-challenging-an-analysis-of-poverty-opportunity-and-ptsd/

          “A large amount of effort on the part of Dr Kessler’s team and the National Bureau of Economic Research (NBER) was necessary to make our reproduction and reanalysis possible. Such effort should probably be designed-in to all publicly funded research, with the contracts specifically allocating funds for open data and open code, and for independent reanalysis by third parties.”

  2. Interesting idea, but we can make pre-publication review more efficient if journals only send reviewers one section at a time. If we sign off on the intro and lit review and hypotheses as compelling, we receive the method section. If we sign off on the method, we receive the results, etc. That way we screen out the duds along the way and only do a full review of serious contenders. Do any journals already do this?

    • I haven’t heard of anything exactly like this, but Registered Reports (https://cos.io/rr/) are a new type of submission that some journals are accepting. In Registered Reports, the intro, proposed method, and planned analyses are reviewed before data are collected. This gives editors/reviewers a chance to suggest changes to the method, and it allows for results-blind evaluation (typically papers that pass the initial stage of review are given “in principle acceptance” so that only studies that would be worth publishing regardless of the outcome are approved, and they are in fact published regardless of outcome).

  3. This reminds me of the difference (in another context) between “hard to break” and “easy to fix”. The current system *wants* to be hard to break (making sure no bad papers get through) but in practice it doesn’t work so well. Your suggestion is closer to the easy-to-fix idea, namely that we cannot catch all the mistakes pre-publication, so we should make sure we have good post-publication review.

    I am more favorable to pre-publication peer review than you are, but I agree with you that it is wrong to consider it the final word. Peer review should be both pre- and post-. Pre- will lead to rejecting papers that are obviously wrong (well…) or simply not original, while post- will allow researchers to catch more subtle mistakes, or to conduct replications etc.

    We need both! And yet people are so sensitive and get upset when you raise the possibility of errors, post-publication…

  4. I have mentioned this idea before, but I’ll repeat it anyway. There are many potential improvements to the publication process and invariably we will need to balance a number of potentially conflicting objectives. Here is my idea – which I still find provides a good balance:

    Manuscripts submitted to a journal undergo an initial review by the editors to screen whether the idea is worth people’s attention in that journal – not whether it is correct. Being “accepted” at this stage is something you can put on a resume, although it “counts” less than making it through subsequent stages of review. Once this initial “acceptance” stage is passed, the manuscript (with data) is published electronically for review. Reviewers can be anonymous or not – it is up to the reviewer to decide. After a period of time (for example 3-6 months) the manuscript is closed to further reviews and the editors make a decision regarding whether the article is accepted for final publication in the official journal (which may also be in print form), whether it should be revised, or whether the comments received warrant the article to not be considered further. Naturally, acceptance into the final journal “counts” for more on your resume. Once finally published, the article should be open to further comments online so that it is not really “final” but part of an ongoing process of building knowledge.

    I like many aspects of this proposal and I think it fits with the practical necessity of academic and research careers – plenty of opportunities to get “credit” for doing interesting work, for it passing various stages of review, for reviewers to take credit for reviews if they wish, etc. The real implementation hurdle will be to get editors who are both recognized researchers and willing to break down the research hierarchy that has helped get them where they already are. Plus – and this is a big hurdle – they must be willing to expend the increased effort that my process requires of them.

  5. Can we also have more statistical review? I have seen papers for which reviewers said there was nothing to review that are now listed on Retraction Watch for misuse of statistics, among other serious errors. Also, people say nearly everything is underpowered.

    Could requiring statistical review as a general standard, done, say, around the time of ethics review, improve such problems?

    • Anon:

      As always, the problem with pre-publication review is limited resources. If pre-publication statistics review is anything like pre-publication ethics review, it really will just be a bunch of hoops to jump through. The point of doing review, statistical or otherwise, post-publication is that these resources can be used more effectively.
