More proposals to reform the peer-review system

Chris Said points us to two proposals to fix the system for reviewing scientific papers. Both the proposals are focused on biological research.

Said writes:

The growing problems with scientific research are by now well known: Many results in the top journals are cherry picked, methodological weaknesses and other important caveats are often swept under the rug, and a large fraction of findings cannot be replicated. In some rare cases, there is even outright fraud. This waste of resources is unfair to the general public that pays for most of the research. . . .

Scientists have known about these problems for decades, and there have been several well-intentioned efforts to fix them. The Journal of Articles in Support of the Null Hypothesis (JASNH) is specifically dedicated to null results. . . . Simmons and colleagues (2011) have proposed lists of regulations for other journals to enforce, including minimum sample sizes and requirements for the disclosure of all variables and analyses.

As well-intentioned as these important, necessary first steps may be, they have all failed to catch on.

Before getting to Said’s suggestion, let me interject that, yes, I agree these are important problems, but I don’t like the idea of a “Journal of Articles in Support of the Null Hypothesis.” I think the whole null hypothesis thing is a bad idea. I understand the appeal of seeing whether a pattern can be explained merely by chance, but let’s not go overboard here: in lots of examples of biological and social sciences, the null hypothesis can’t be true. The issue isn’t “support of the null hypothesis” so much as inferential uncertainty.

Said continues:

Granting agencies should reward scientists who publish in journals that have acceptance criteria that are aligned with good science. In particular, the agencies should favor journals that devote special sections to replications, including failures to replicate. . . . I would like to see some preference given to fully “outcome-unbiased” journals that make decisions based on the quality of the experimental design and the importance of the scientific question, not the outcome of the experiment. This type of policy naturally eliminates the temptation to manipulate data towards desired outcomes.

The recommendations seem reasonable, but I disagree with the claim that, if implemented, they would "eliminate the temptation to manipulate data towards desired outcomes." We're doing science here, we want to make discoveries! Of course there is a temptation to find what we want to find. I agree that if publication in the "tabloids" and grant funding aren't on the line, the temptations are lower, but they're still there.

Said also points to this proposal by Niko Kriegeskorte for open post-publication peer review. I don’t quite see how it works but it’s probably a good idea, sort of like my modification of Larry Wasserman’s idea mentioned in our previous post, where I note that if all our publication shifts to Arxiv-like repositories, the defunct journals can retool as lists of recommended reading.

P.S. Said also writes that, “in the current system, the only signal of a paper’s quality is the journal’s impact factor.” I don’t see this at all. Here are some other signals:
– Journal quality. Impact factor != quality. For example, mediocre biology journals have higher impact factors than top statistics journals.
– Citation counts. With Google you can start counting citations right away. Citations are no guarantee of quality either, but they are another signal, not the same as the journal’s impact factor.
– The authors. All else equal, I’d expect a paper from a recognized lab to be taken more seriously than a paper from nowhere.
– And, of course, the paper itself, starting with the title and abstract.

That’s a lot of signals right there (see also the many different ratings collated here), and I’m probably forgetting a few more.


As I wrote in the previous post, I’m glad to see people thinking about reforms. I made the comments above not to shoot down the ideas of Said and Kriegeskorte but rather to explore some complications.

The current system has obvious problems; one result of this is that almost anything can seem like a good solution. It’s sort of like education reform: choose Back to Basics, or Student-Centered Learning, or whatever: any of these ideas could be good, but it depends on the implementation. In any case, Said is probably right that funders could push for a lot of changes. Sort of like what William F. Buckley said about college education in the 1950s.


  1. Phil says:

    The idea of a journal in which the reviewers don’t take the results into account is intriguing. One can even imagine that for some papers the reviewers could be blinded to the results, although this wouldn’t work for most papers. Right now a question on a lot of reviewer forms, and one that I imagine gets a lot of weight from the editors, is “are the results interesting” or “are the results worth publishing” or similar; I’d never previously thought about how harmful that can be.

  2. Chris Said says:

    Hi Andrew – 
    Thanks for posting!

    You’re right — I probably came on too strong when I said my proposal would “eliminate” the temptation to manipulate data. But it sounds like we both agree the temptation would be “lower.” We need granting agencies to help us implement these ideas. They are not perfect, but they are better than the status quo.

    Also, you’re right that there are currently more signals about a paper than just the impact factor. But the available signals are still nowhere close to enough. There are hundreds of papers by reputable authors and published in reputable journals that contain serious confounds or cannot be replicated. And the failures-to-replicate are rarely published. I often only find out by word-of-mouth. There should be post-publication comments and ratings to alert me to these issues.

    Finally, I completely support science as inference instead of NHST, although that’s a separate fight. I have to work in a NHST world. Reducing the bias towards significant results will naturally lead to more accurate estimates.

    • K? O'Rourke says:

      As for quality signals, there are many, but:

      “It appears that ‘quality’ (whatever leads to more valid results) is of fairly high dimension and possibly non‐additive and nonlinear, and that quality dimensions are highly application‐specific and hard to measure from published information.”


      But then prevention is perhaps even harder than cure here…

  3. John Mashey says:

    Given that the world is online … I think there would be a significant value-add if publication software supported easy addition of *moderated* comments, attached to the online article. It’s really annoying when refutations, non-replications, or significant commentary are scattered around other journals, blogs, and websites, and the response is often delayed a long time if it has to come as a peer-reviewed paper.

    It can take serious effort to figure out if
    (a) Nobody cared about a paper OR
    (b) It was slaughtered in blogs by experts, who thought it was so bad they didn’t bother to publish peer-reviewed papers about it.

    Actually (and this follows a value-chain discussion elsewhere), I think such comments would be a serious value-add for the editor/publisher, i.e., they would generate additional useful content, assuming good moderation to weed out junk.

  4. DK says:

    The most comprehensive and effective peer review reform: abolish it altogether. It does not work and it never worked, not even to 10% of what it is claimed to accomplish. It’s a sacred cow whose only value is in serving a bunch of special interests. The sooner we do away with it, and with the idea that only a select few credentialed people are allowed to be bearers of the ultimate truth, the better. It will die a natural death in the not too distant future anyway — so why wait?

  5. Entsophy says:

    There is one idea which would be almost trivial to do and would work in some fields like Statistics. The idea comes from Clifford Truesdell and the policy he used for several journals he edited.

    Truesdell was something of a character. He was the major force in Continuum Mechanics in the 20th century at a time when Theoretical Physicists had dropped the subject. Truesdell, who was fluent in many languages, liked to draw inspiration from the works of the great mathematical physicists of 1700-1900. I believe that is where he got the inspiration for his editorial policy; it came from some great journal or society of the past.

    The way it worked was that there was an editorial board of say 10-15 people. If you wanted a paper published you simply sent it to whichever board member you wanted. The board member was under no obligation to look at the paper or submit it, but if they did approve it, the title page would contain something like:

    “submitted by [Board Member] on behalf of [the Author]”

    So the Board Member’s name and reputation were tied to the paper. There was no other peer review, nor was there any obligation on the author or member. Board Members could refuse to submit a paper for publication for any reason at any time. If a Board Member thought a work was important, but had misgivings about it they could publish it “with reservations” detailed on the title page.

    Papers were encouraged to be longer, more complete, and thorough. Authors were encouraged to submit material whose form and content made it of lasting value, not material meant just to bulk up their publication numbers.

    So to do this in Statistics, get Gelman and a number of well-respected statisticians to form the initial board. It would work best if they were a diverse group, but shared a similar sense of good writing and could all recognize good work when they see it.

    Then create a simple website to publish the papers and a few simple bylaws governing how board membership changes. Finally, to start the journal off with a bang, have the board members publish mini monographs on their biggest contribution or most important work. That will give it the initial quality and reputation needed for people to pay attention to it.
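    To make the endorsement mechanics concrete, here is a minimal sketch of the Truesdell-style submission record (the class and field names are hypothetical, just for illustration):

    ```python
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Submission:
        """A paper endorsed by a board member, Truesdell-style."""
        author: str
        title: str
        board_member: str
        # A member may endorse "with reservations" detailed on the title page.
        reservations: Optional[str] = None

        def title_page_line(self) -> str:
            line = f"submitted by {self.board_member} on behalf of {self.author}"
            if self.reservations:
                line += f" (with reservations: {self.reservations})"
            return line

    # A board member is under no obligation to endorse anything; an endorsement
    # simply ties their name and reputation to the paper -- that is the whole
    # review process.
    s = Submission(author="A. Author", title="On Widgets",
                   board_member="C. Truesdell")
    print(s.title_page_line())
    # -> submitted by C. Truesdell on behalf of A. Author
    ```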

    • Brian says:

      That’s quite similar to how PNAS works – except articles are designated as “edited by” a member of the NAS (rather than the editorial board), and there’s still a peer review process.

  6. […] each week. At this point, even the saying No part is getting tiring. I think I’d much prefer Kriegeskorte’s system of post-publication review where whatever you write about a paper is open and available to all to […]