No, it’s not a “witch hunt”

In a story in Retraction Watch entitled “Rejection overruled, retraction ensues when annoyed reviewer does deep dive into data,” Adam Marcus reports:

As a prominent criminologist, Kim Rossmo often gets asked to review manuscripts. So it was that he found himself reviewing a meta-analysis by a pair of Dutch researchers — Wim Bernasco and Remco van Dijke . . . The paper, for Crime Science, analyzed 33 studies . . .

Rossmo found problems with the paper; in his words:

I [Rossmo] began by reading the 33 articles that comprised the basis of the systematic review. When I did so, however, I discovered different results from what was reported in the systematic review (e.g., simulated data, mixed findings, opposite results, violations of the posted selection criteria, etc.). . . . I wrote up a full replication report, concluded the article should be retracted, and sent it to the editor and the first author. This time the author said he would take another look at their work. A couple of months later, he responded that he found even greater discrepancies and agreed the paper should be retracted. Apparently, all the coding for the systematic review was done by the second author. The journal editors also agreed to the retraction.

This is a story with a happy ending because the author of the original article accepted the critique; he did not dodge it or fight it; his loyalty was to science and to society, not to his publication record.

From the retraction notice:

The authors attempted to replicate their own findings by re-assessing the 33 publications. Based on the results, they concluded that Professor Rossmo’s concern was fully justified. . . . The authors apologize to the readers and the Editors of Crime Science for any problems caused by drawing conclusions not sufficiently supported by evidence, and they thank Professor Rossmo for bringing the issue to their attention. . . . Both authors agree to the retraction.

When do you ever see that? This should be how researchers always behave when outsiders point out errors in their work, but unfortunately this clean admission of error is rare.

And note that I said “admission of error,” not “admission of guilt.” Everyone makes mistakes. We learn from our mistakes, but only when we are willing to learn. So I appreciate Bernasco and van Dijke for doing the right thing.

Just one thing

After the episode was over, the first author of the paper wrote:

Finding out and having to admit that we as authors had made errors was unpleasant, but the way it was handled left us room to make up our minds, and time to get back to the data and attempt to replicate our own work. The ‘whistleblower’ did not make his concerns public but contacted us and the editors of Crime Science about his inability to replicate our findings. This was painful, but becoming the target of a public debate would have been much worse.

This could also be read as advice to those who identify errors (or worse) in the work of others: Do not immediately put the authors on the stand in public, but contact them and the journal editors first. The self-correcting power of science should not become a witch-hunt.

This could be good advice. I guess it depends on the context. The authors of this article responded quickly and appropriately to the criticism, and it all went well. In many other cases, authors simply ignore criticism, or they brush it aside, or they start coming up with alternative stories as a way of avoiding the issue. Sometimes I think it will be effective to contact the journal and the authors directly; other times this will just delay things. Recall that the pizzagate guy was being contacted by people for several years and managed to never address any of the concerns; it was only after public exposure that anything got done, and even then it took a while.

Also, I think the author’s use of the term “witch hunt” is inappropriate. There is no such thing as a witch (defined as someone who can actually do supernatural things); thus a “witch hunt” refers to a setting in which people are persecuted for something they did not do and could not have done.

In contrast, scientific errors are real. In addition, publicly writing about scientific errors is just the sharing of information. The original article with all its errors was out there in public, presumably because some readers would find it useful. These readers would be well served by finding out as soon as possible that the research is in error.

What I’m saying here is that, ultimately, this research is not being done for the benefit of the author or the journal; it’s for the benefit of the readers of the journal, and ultimately for society. If you don’t want your work to be publicly discussed, you shouldn’t publish it. We make criticisms in public for the same reason that we write articles for publication: because we think this work is ultimately of relevance to people on the outside.

7 thoughts on “No, it’s not a ‘witch hunt’”

  1. It’s as if the first author is saying, “If you want to criticize a published article, you should have the foresight to have reviewed it, the fortitude to be prominent in your field, the luck that the errors weren’t due to the first author, and the ability to rerun the identical study, with the exact same subjects, immediately and without funding.”

  2. This is a great case study and should figure prominently in discussions of how to handle errors. We can all catch errors or make suggestions on each other’s work if we are in the circle of colleagues who have seen the preprint. And many times this leads to a nice acknowledgment. In some sense, peer review need not be so far removed from this process: it is a careful audit of the research. Sometimes this can lead to stronger or more refined results, and sometimes it can lead to an outcome where the results are found to be in error. If this is all handled with sensitivity and collegiality, it is likely to go well.

  3. I don’t know what “The self-correcting power of science” is; I would love to see a description.

    I am with Andrew on having an issue with the idea of using kid gloves with errors in published work. If there is an expectation that errors will be handled “with sensitivity and collegiality,” there will be less of an incentive to use care (such as the primary author showing curiosity about the secondary author’s work) before publishing results to the general public – that’s what publishing means. Some public embarrassment is a reasonable consequence for shoddy work, and is a good incentive to keep people honest to boot.

    How bad the embarrassment gets really depends on how the errant author deals with the criticism. Honesty, humility, and sharing the data would blunt much of it.

    • I really appreciate this. I found a similar error in a highly publicized article whose findings were entirely untrue. Unfortunately, when I brought this to the attention of the authors and the editors (and the people at the university who did the press releases), nobody could have cared less. So I had to write a commentary, and the authors stood their ground. Whatever – it is clear who is motivated by citations / paper count and who is not these days.

  4. “There is no such thing as a witch (defined as someone who can actually do supernatural things); thus a ‘witch hunt’ refers to a setting in which people are persecuted for something they did not do and could not have done.”

    That is obtuse. The people who conducted witch hunts in Massachusetts in the 1690s thought there were real witches. And they stopped the witch hunts after the excesses of Salem because they saw the same problem with them that we see today, not because they decided witches didn’t exist.

    The problem with a witch hunt is that there is no means of defending against the charge, at least none in proportion to the community’s prejudice to convict and punish. Another is that the real motive of the witch hunt is to increase the hunter’s social status, out of proportion to any actual threat.

    Janet Reno’s actions as AG against Satan-worshipping child abusers constituted a witch hunt, even though there surely were a few actual examples.

  5. A couple of side issues:

    What happened during the peer review process? They didn’t find any problems?

    Did the authors explain what happened? It seems hard to write code that does all these things haphazardly: “simulated data, mixed findings, opposite results, violations of the posted selection criteria, etc.”

    • If you read the original Retraction Watch post, Kim did say he recommended rejection. Only after the paper was published and he started writing a rebuttal did he do more sleuthing and find the more serious violations.

      Do you review papers, Kaiser? In the social sciences it is pretty rare for folks to share code, so it typically isn’t even possible to review it. When I have provided code in my papers over the years, only a few folks have looked at it and made comments. Checking code is more work than is typically expected of peer reviewers in the social sciences I am aware of. (IIRC, Gelman says he spends only a few minutes reviewing a paper at a high level to give feedback?)

      I think it would be a good idea for journals to hire something akin to fact checkers/code checkers. Some journals with replication requirements are doing this now, and it is quite a bit of work.
