How should journals handle replication studies?

Sanjay Srivastava reports:

Recently Ben Goldacre wrote about a group of researchers (Stuart Ritchie, Chris French, and Richard Wiseman) whose null replication of 3 experiments from the infamous Bem ESP paper was rejected by JPSP – the same journal that published Bem’s paper.

Srivastava recognizes that JPSP does not usually publish replications, but argues this case is different because it is an anti-replication.

Here’s the paradox:

– From a scientific point of view, the Ritchie et al. results are boring. To find out that there’s no evidence for ESP . . . that adds essentially zero to our scientific understanding. What next, a paper demonstrating that pigeons can fly higher than chickens? Maybe an article in the Journal of the Materials Research Society demonstrating that diamonds can scratch marble but not the reverse??

– But from a science-communication perspective, the null replication is a big deal because it adds credence to my hypothesis that the earlier ESP claims arose from some sort of measurement error (which might be of interest for people doing straight psychology experiments using similar methods).

The rules of journal publication are all about scientific progress, but scientific journals are plugged into the news media, where the rules are different. My guess is that the JPSP editors thought the original Bem article was not real science even when they accepted it for publication, but they wanted to be open-minded and bend over backward to be fair. Sort of like what happened when Statistical Science published that notorious Bible Code paper back in 1994.

4 thoughts on "How should journals handle replication studies?"

  1. I can understand a journal not wanting to publish these kinds of studies in paper format, but I don't understand why they don't develop an area on their website where these studies can live. This is cheap, easily doable, and (much more than 'merely' publishing original research) akin to the proper role of science as a methodological process in which claims are put up for falsification. Paper Journal = Claims Area; Website = Discussion/Replication/Falsification Area.

  2. One of my first papers was a replication study that failed to replicate a very implausible association. I guess I got lucky: the previous positive finding enabled me to be published on my first attempt (quite a rare outcome, in my experience).

    I also think that a failure to replicate is really useful. Not only can it illuminate measurement error or point out chance findings, but it might also suggest that the key variables are not what we think they are. If we find an association between A and B in population X, but not in population Y, then maybe C is the real culprit (associated with A in population X but not in population Y).

    So perhaps it is my own history that gives me a partisan opinion, but I'd prefer a world where the failure to replicate could be easily sent to the same journal.

  3. I'm sure this (and the stuff about overestimating small effects) is part of a much, much larger discussion about publishing, but maybe part of the problem comes from conflating three different properties of scientific work:

    (1) This work is potentially news-worthy.
    (2) This work should become part of the "scientific record."
    (3) This is work that the author should be recognized/lauded for.

    It seems like getting your work into most journals requires the right combination of all three, but all three are important (and potentially separable).

  4. The journal Judgment and Decision Making, which I edit, has a policy of publishing "attempted replications of surprising results". The upshot is that, for better or worse, we have become the place to publish failures to replicate studies claiming that unconscious thought beats conscious thought in judgment and choice. The difficult question now is when to stop. But this seems to me to be a reasonable policy for other journals.
