When doing scientific replication or criticism, collaboration with the original authors is fine but I don’t think it should be a requirement or even an expectation

Dominik Papies points me to this article, “Matched-Names Analysis Reveals No Evidence of Name-Meaning Effects,” by psychologist and data detective Uri Simonsohn, in collaboration with Raphael Silberzahn and Eric Luis Uhlmann, the two authors of an earlier study that this new report is refuting.

Papies writes:

This seems to me an interesting case where a piece of research is criticized and the critics and the original authors resolve the issue together in a follow-up article. This may fit into your thoughts and writing on how to deal with method disputes.

I do think this article is notable, and Silberzahn and Uhlmann are admirable in accepting that they might have made a mistake in their earlier publication, an attitude that is disturbingly rare, not just in academic psychology but in journalism as well, as we’ve discussed often enough in this space during the past few years.

That said, I want to hold on to the hard line that, while this sort of collaboration is excellent when doing a criticism, reanalysis, or replication, and I think this sort of collaboration should be encouraged, I don’t think it should be considered as a requirement or even an expectation in such interactions. (Sorry for all the italics; I think I’ve been malignly influenced by too much reading of Dan Kahan.)

It should be clear to all (I assume) why the collaboration exemplified by Silberzahn, Simonsohn, and Uhlmann is a good thing. The reason I still don’t think such collaboration should be an expectation or a requirement is, simply, cost. I think that criticism is important enough that it should not be burdened with a necessary link to collaboration.

One way to see this is to consider the pre-publication review process. Referees and journal editors are allowed—indeed, expected—to offer methodological and substantive criticisms without the expectation that they contact the authors, visit the authors’ labs, etc. I think the same should hold for post-publication criticism. When a collaboration can be done, as in the example above (or in a recent example of my own, where a prominent survey researcher offered a bunch of helpful comments on a forthcoming paper of ours, fortunately giving us the comments in time for us to alter the paper before publication), that’s great, and it should happen more often, and it should be encouraged. Just not required.

14 thoughts on “When doing scientific replication or criticism, collaboration with the original authors is fine but I don’t think it should be a requirement or even an expectation”

  1. My understanding of the reason for the proposed etiquette that replicators collaborate with the original authors is that replication can affect the original authors’ reputations. In that case, the comparison to pre-publication peer review is not exact, because pre-publication peer review has limited or no effect on the original authors’ reputations: the criticism in pre-publication peer review is essentially private and made before the final version of a paper is published, so that authors are aware of and have an opportunity to address any criticism.

    Fairness regarding reputations does not require that replicators collaborate with the original authors regarding a single replication because even a failed replication should not harm the original authors’ reputations. I think that the call for collaboration in replication is due to the perception that some people erroneously think that a single failed replication harms or should harm the original authors’ reputations; however, the solution is to correct the idea that a single failed replication harms or should harm the original authors’ reputations.

    I am more ambivalent about expecting replicators to invite the original authors to observe a replication for cases in which a failed replication might cause a correct downgrading of the original authors’ reputations, such as if there was already a failed replication on record or if the replicators intended to replicate the entire corpus of the original authors.

    • LJ:

      I disagree with your statement that “pre-publication peer review has a limited or no effect on the original authors’ reputations . . . authors are aware of and have an opportunity to address any criticism.” Pre-publication review is often a rejection and authors have no opportunity to address the criticism. And it can have a huge effect on an author’s reputation. The difference between getting a paper in a top journal, or not, can be a big deal, especially for an author who is at a low or middling-ranked institution where one top publication can count for a lot.

      • That’s a good point, Andrew, so I’ll need to cabin my remarks.

        The potential reputational harm from not publishing in top journals is real but is different from the potential reputational harm caused by a series of failed replications. Not publishing in top journals might mean that the research community does not deem a researcher’s work important, but — as far as I can tell — signals nothing about the researcher’s technical competence or integrity; however, a series of failed replications might cause legitimate concern about a researcher’s technical competence or integrity.

        I don’t think that it is realistic to expect — or want — reviewer-editor-author collaboration in pre-publication peer review to correct reviewer or editor perceptions about whether a paper deserves to be published in a top journal; however, I do think that it is realistic to expect — and want — replicator-original author collaboration — in the sense of the original authors observing the process — to correct or at least identify differences between the process of the original study and the process of the replication, at least for cases in which the replicators plan to claim that their process was the exact same process as in the original study.

      • I agree with @L J Sigerell on this issue of reputation although on the larger issue I’m with Andrew: I don’t think collaboration ought to be a requirement.

        Andrew, you are conflating an error of commission with an error of omission. The notoriety of, say, a conviction isn’t the same as that of the lost potential glory of an unwon prize. A retracted paper or allegations of non-replication have a *hugely* greater impact than, say, a guy submitting to Nature / Science and being rejected by them.

        For one, a retraction or non-replication has almost universal reach, whereas a rejection by one journal is usually not evident to anyone outside of that Journal and the authors.

        • For what it’s worth, I agree that replicators should not be required to collaborate with the original authors; I also think that there should be no expectation of collaboration for conceptual replications. But I do think that there might be situations in which replicators should be expected — but not required — to at least invite the original authors to observe a direct replication; not “expected” in the sense of “morally obligated” or “considered unethical if not done” but only “expected” in the sense that it would be fair to wonder why someone would conduct a direct replication without at least involving the original authors.

  2. If cost is the concern, though, then the collaboration consideration should be based on the expected cost of collaborating vs. not collaborating. For example, when setting up a replication experiment, if you find you have to make decisions about experimental setup that weren’t documented in the original work, you should probably ask for clarification before proceeding. The alternative is to make a decision that risks invalidating the comparison. This could still prove useful, once the difference is uncovered, but that will require after-the-fact collaboration of a sort anyway, often far less amicably.

    There’s also a cost to the publishers and the community… having to wait weeks to read back-and-forth arguments, and having to peer-review each iteration seems costly, to the point that I’d say at least an attempt to collaborate seems like it could reduce some costs. The increase in civility that the cited example gave is another laudable benefit.

    So while I agree that collaboration can be onerous, I’m not sure the costs outweigh the benefits. On the other hand, of course, requiring it would prove impossible in many cases due to sheer logistics. Can we come up with a non-requirement incentive system? Nothing obvious comes to mind… reduced publishing rates or faster peer review for rebuttal papers where the original author is a co-author on the rebuttal come to mind, but drawbacks abound. Something to make it more attractive to avoid the back and forth that non-collaboration can engender. Just a thought.

    • Chip:

      Another issue is that there can be value in simple criticism, which can be much less costly than replication. I certainly don’t see any need for a requirement or norm that critics must collaborate with the original authors. A published paper is in the public record and I think it is very appropriate for criticism to also be based on this public record. If there is something important in a research project that is not in the public record, and this is made clear by a critic, then I think the most appropriate response for the original authors is not to complain or to fight but rather to thank the critic for pointing out a gap in the publication, and then for the original authors to take the opportunity to correct that record so that future researchers will better understand the study that was done.

  3. I take a middle line: I don’t think collaboration ought to be required. OTOH I think it ought to become standard practice in such cases for the original authors to be sent the rebuttal manuscript by the editors of the Journal involved and then their comments solicited.

    The original authors, of course, get no veto nor are they even obliged to respond. But it gives them a fair chance to put their side of the story before the reviewers and then the reviewers can take the right call about the follow up paper.

    I think this makes the review process fairer, and gives a chance to clear up some misunderstandings pre-publication.

  4. I’m sure it’s no fun to be “non-replicated,” but to a non-academic, it sounds odd to worry about the reputation of the original author. Isn’t this the business you have chosen? (Puzo & Coppola 1974) P values and all that? Leaving aside whether a replication is any good–which seems to me to be a separate issue, with more at stake anyway for the replicator than the original author–if you have one paper that rejects the null and one that doesn’t, well, that’s bound to happen a nontrivial percentage of the time, so we should run the thing again, right? I guess I’m continually surprised that results published once (and only once) are given so much weight.

    • As an academic who “grew up” in mathematics, I’m inclined to agree with Kyle. When I was young (the early 1970’s), we routinely sent “preprints” of papers to colleagues we thought might be interested in the work. (Nowadays the arXiv makes that easier.) The preprint usually had “submitted to journal X” written on it. If someone read the paper and found a mistake, they would write you a polite note, and you could withdraw it before it got published, maybe patching up the mistake and resubmitting it, or maybe ending up with a joint paper with the person who noticed the mistake and was able to patch it up. Sometimes you’d get a preprint from someone else a while later referencing your paper as “to appear.” Sometimes you’d find out that someone else had just proved the same result, so you might write up your results together. Very few people were nasty. But you were accountable for the quality of your work — that was something that you learned as an undergraduate. Sometimes the person who first proves a result uses a method that almost no-one can follow readily. Giving a more understandable proof is considered a legitimate contribution to the literature.
      Contrasting with the biology discussion groups I attend now, there is nowhere near as much pre-publication dissemination, but usually people will send a copy of a paper not yet published when requested, with a request not to send it on to anyone else without consent of the author(s). But in biology, there is often disagreement about results — methodology is often (of necessity) ad hoc, but sometimes people use “standard” methods that don’t really fit, so there is room for criticism of methodology. But again, there is little nastiness.
      But some of what I have been reading about psychology replications is really sad — so much nastiness.

  5. Pingback: Psychology News Round-Up (August 8th) | Character and Context

    • Chris:

      I agree with what you and Meyer wrote, and I remain annoyed that researchers seem to not even consider the possibility that their original findings are artifacts of their data and not general statements about reality. People seem to think that the combination of “p less than 0.05” and “publication in a top journal” is enough to give a claim some special status.
