“How does peer review shape science?”

In a paper subtitled “A simulation study of editors, reviewers, and the scientific publication process,” political scientist Justin Esarey writes:

Under any system I study, a majority of accepted papers will be evaluated by the average reader as not meeting the standards of the journal. Moreover, all systems allow random chance to play a strong role in the acceptance decision. Heterogeneous reviewer and reader standards for scientific quality drive both results.

He concludes:

A peer review system with an active editor (who uses desk rejection before review and does not rely strictly on reviewer votes to make decisions) can mitigate some of these effects.

This seems reasonable to me. As a reviewer, I give my recommendation, but I recognize that the decision is up to the editor. This takes the pressure off me: I feel that all I have to do is provide useful information, not make the decision.
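
To get a feel for the mechanism behind the randomness result, here is a stripped-down toy simulation of my own; it is not Esarey’s model, just the general idea from the quote above: reviewers apply idiosyncratic standards to noisy readings of a paper, and the journal accepts on a majority vote. The thresholds, noise levels, and the “average reader” bar are all invented for illustration. Refereeing each paper twice, with independent panels, gives a crude measure of how much of the decision is luck of the reviewer draw.

    import numpy as np

    rng = np.random.default_rng(0)

    n_papers = 20_000
    n_reviewers = 3

    def majority_decision(quality, rng):
        """Accept if a majority of reviewers, each with an idiosyncratic
        standard and a noisy reading of the paper, vote to accept."""
        n = quality.size
        standards = rng.normal(0.5, 0.75, (n, n_reviewers))   # heterogeneous bars
        perceived = quality[:, None] + rng.normal(0.0, 0.75, (n, n_reviewers))
        return (perceived > standards).sum(axis=1) >= 2

    quality = rng.normal(0.0, 1.0, n_papers)        # latent paper quality

    # Referee every paper twice, with independent reviewer panels.
    accept_a = majority_decision(quality, rng)
    accept_b = majority_decision(quality, rng)

    disagree = (accept_a != accept_b).mean()        # luck of the reviewer draw
    reader_bar = quality > 0.5                      # an "average reader's" standard
    below_bar = 1.0 - reader_bar[accept_a].mean()   # accepted papers the reader would reject

    print(f"acceptance rate:               {accept_a.mean():.0%}")
    print(f"panels disagree on:            {disagree:.0%} of papers")
    print(f"accepted but below reader bar: {below_bar:.0%}")

The particular numbers depend entirely on the made-up noise and standard parameters, but the qualitative point survives: with heterogeneous standards and noisy readings, the same paper can easily get different verdicts from different panels.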

Esarey’s paper also includes some graphs which are pretty good, but I won’t include them here because I’m so bummed that he doesn’t label the lines directly—I can’t stand having to go back and forth between the lines and the legend. Also I don’t like graphs where the y-axis represents probability but the axis goes below 0 and above 1. It’s all there, though, at the link.
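
If you want the kind of display I’m talking about, here’s a minimal matplotlib sketch with made-up curves (not Esarey’s results): each line is labeled directly at its right end instead of in a legend, and the probability axis is held to [0, 1], with the breathing room coming from the x-margins rather than from letting the y-axis spill past the boundaries.

    import numpy as np
    import matplotlib.pyplot as plt

    # Made-up acceptance-probability curves, purely to illustrate the styling.
    x = np.linspace(0, 10, 200)
    curves = {
        "reviewer vote": 0.55 / (1 + np.exp(-(x - 5))),
        "active editor": 0.90 / (1 + np.exp(-(x - 5))),
    }

    fig, ax = plt.subplots()
    for label, y in curves.items():
        (line,) = ax.plot(x, y)
        # Label the line directly at its right end instead of using a legend.
        ax.annotate(label, xy=(x[-1], y[-1]), xytext=(5, 0),
                    textcoords="offset points", va="center",
                    color=line.get_color())

    # Probability stays on a [0, 1] axis; the breathing room comes from the
    # x-margins, not from letting the y-axis run below 0 or above 1.
    ax.set_ylim(0, 1)
    ax.margins(x=0.15)
    ax.set_xlabel("paper quality (arbitrary units)")
    ax.set_ylabel("probability of acceptance")
    plt.show()

The details are a matter of taste, of course; the point is just that the labels and the bounds can live on the plot itself.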

I like the paper. I haven’t read the details, so I can’t comment on Esarey’s specific models, but the general features make sense, and it seems like a good start in any case.

16 thoughts on ““How does peer review shape science?””

    • I fully agree with this. The data elements shouldn’t collide with the edges, so I want a little room. That being said, for something that’s bounded between a and b (or 0 and 1 in the specific case of probability), it could be useful to have light-grey horizontal lines (or something equally low-key) at the boundary values, so that at any point on the graph you can see exactly where the value is relative to the boundaries.

        • I like those gray lines. I was thinking that the problem with having the line that shows the scale of x also serve as the line indicating 0 is that it would be weird to have the 0 values foregrounded on the scale line. I use crime data too, and it is really important to know what is 0 or near 0 and what is empty. You are not really showing y less than 0; you are really separating the scale from the 0.

  1. The strong role of random chance seems unacceptable, especially given the typically long review times. It cannot benefit science to have six-month review times where the outcome is in no small part random.

    • Based on what I saw in my simulations, it’ll be hard to eliminate chance as a factor using any of the traditional institutions of peer review. We could go to the PLoS model, or back to the old model of unilateral editor acceptance, but I’m sure those institutions have problems as well. It’s an interesting puzzle…

      Great name, by the way.

  2. Isn’t a “political scientist” doing simulations of peer review the third sign of the apocalypse? The first two being ‘econophysics’ and economists doing endless statistical studies of econ-journal citations?

  3. “all systems allow random chance to play a strong role in the acceptance decision”

    Some folks with a lot of patience and a desire to “succeed” exploit it on a routine basis: everything, no matter how mundane, gets sent to the Top 3 journals (in succession, if needed), then to the Top 10, etc. The end result is that they end up only publishing “brilliant science”.

    • Yep. Journal publication, and the equating of *which journal* with *how brilliant*, is just flat-out broken. It’s like an airplane with no wings. Lots of people are climbing around on the fuselage saying it needs just a little patch-up here and there. And then there are those of us facepalming and shaking our heads.

      • The only reasonable alternative, in my opinion, is something like “archive.org”, whose whole goal is simply to record/store and provide permanent reference addresses for people’s publications. Let “how brilliant” be a function of *CONTENT*, reviewer reputation, open advocacy, and open criticism.
