The Javert paradox rears its ugly head

The Javert paradox is, you will recall, the following: Suppose you find a problem with published work. If you just point it out once or twice, the authors of the work are likely to do nothing. But if you really pursue the problem, then you look like a Javert. I labeled the paradox a few years ago in an article entitled, “Can You Criticize Science (or Do Science) Without Looking Like an Obsessive? Maybe Not.”

This came up recently in an email from Chuck Jackson, who pointed to this news article that went like this:

Does ocean acidification alter fish behavior? Fraud allegations create a sea of doubt . . .

[Biologist Philip] Munday has co-authored more than 250 papers and drawn scores of aspiring scientists to Townsville, a mecca of marine biology on Australia’s northeastern coast. He is best known for pioneering work on the effects of the oceans’ changing chemistry on fish, part of it carried out with Danielle Dixson, a U.S. biologist who obtained her Ph.D. under Munday’s supervision in 2012 and has since become a successful lab head at the University of Delaware . . .

In 2009, Munday and Dixson began to publish evidence that ocean acidification—a knock-on effect of the rising carbon dioxide (CO2) level in Earth’s atmosphere—has a range of striking effects on fish behavior, such as making them bolder and steering them toward chemicals produced by their predators. As one journalist covering the research put it, “Ocean acidification can mess with a fish’s mind.” The findings, included in a 2014 report from the Intergovernmental Panel on Climate Change (IPCC), could ultimately have “profound consequences for marine diversity” and fisheries, Munday and Dixson warned.

But their work has come under attack. In January 2020, a group of seven young scientists, led by fish physiologist Timothy Clark of Deakin University in Geelong, Australia, published a Nature paper reporting that in a massive, 3-year study, they didn’t see these dramatic effects of acidification on fish behavior at all. . . .

Some scientists hailed it as a stellar example of research replication that cast doubt on extraordinary claims that should have received closer scrutiny from the start. “It is by far the best environmental science paper I have read for a long time,” declared ecotoxicologist John Sumpter of Brunel University London.

Others have criticized the paper as needlessly aggressive. Although Clark and his colleagues didn’t use science’s F-word, fabrication, they did say “methodological or analytical weaknesses” might have led to irreproducible results. And many in the research community knew the seven authors take a strong interest in sloppy science and fraud—they had blown the whistle on a 2016 Science paper by another former Ph.D. student of Munday’s that was subsequently deemed fraudulent and retracted—and felt the Nature paper hinted at malfeasance. . . .

What the hell? It’s now considered “needlessly aggressive” to bring up methodological or analytical weaknesses?

Have these “Others have criticized” people never seen a referee report?

I’m really bothered by this attitude that says that, before publication, a paper can be slammed every which way by anonymous reviewers. But then, once the paper has appeared and the authors are celebrities, all of a sudden it’s considered poor form to talk about its weaknesses.

The news article continues:

The seven [critics] were an “odd little bro-pocket” whose “whole point is to harm other scientists,” marine ecologist John Bruno of the University of North Carolina, Chapel Hill—who hasn’t collaborated with Dixson and Munday—tweeted in October 2020. “The cruelty is the driving force of the work.”

I have no idea what a “bro-pocket” is, and Google was no help here. The seven authors of the critical article appear to be four men and three women. I guess that makes it a “bro pocket”? If the authors had been four women and three men, maybe they would’ve been called a “coven of witches” or some other insult.

In any case, this seems like a classic Javert bind. Sure, the critics get bothered by research flaws: if they weren’t bothered, they wouldn’t have put in the effort to track down all the problems!

More from the news article:

Clark and three others in the group took another, far bigger step: They asked three funders that together spent millions on Dixson’s and Munday’s work—the Australian Research Council (ARC), the U.S. National Science Foundation (NSF), and the U.S. National Institutes of Health (NIH)—to investigate possible fraud in 22 papers. . . .

Munday calls the allegations of fraud “abhorrent” and “slanderous” . . . Dixson denies making up data as well. . . . But multiple scientists and data experts unconnected to the Clark group who reviewed the case at Science’s request flagged a host of problems in the two data sets, and one of them found what he says are serious irregularities in the data for additional papers co-authored by Munday.

Also this:

Dixson, in the February interview, said she did not know about the allegations. Although she denies making up data, “There hypothetically could be an error in there,” she said, perhaps because of mistakes in transcribing the data; “I don’t know. I’m human.” . . . Clark and colleagues also found problems in the data for the 2014 paper in Nature Climate Change, which showed fish behavior is altered near natural CO2 seeps off the coast of Papua New Guinea. (Munday was the first of five authors on the study, Dixson the third.) That data set also contained several blocks of identical measurements, although far fewer than in the Science paper. . . . Munday says Dixson has recently provided him with one original data sheet for the study, which shows she made a mistake transcribing the measurements into the Excel file, explaining the largest set of duplications. “This is a simple human error, not fraud,” he says. Many other data points are similar because the methodology could yield only a limited combination of numbers, he says. Munday says he has sent Nature Climate Change an author correction but says the mistake does not affect the paper’s conclusions.

Bad data but they do not affect the paper’s conclusions, huh? We’ve heard that one before. It kinda makes you wonder why they bother collecting data at all, given that the conclusions never seem to change.
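As an aside, the “blocks of identical measurements” mentioned in the excerpt are the sort of thing that’s easy to screen for mechanically. Here’s a minimal Python sketch of one way to do it; the data values and the window length are invented for illustration, and the actual spreadsheets are of course laid out differently:

```python
# Flag runs of k consecutive values that appear more than once in a
# column of measurements. Each length-k window is turned into a tuple
# (hashable) and used as a dictionary key; any key collected from two
# or more starting positions is a repeated block.

def repeated_blocks(values, k=4):
    """Return lists of start indices whose length-k runs are identical."""
    seen = {}
    for i in range(len(values) - k + 1):
        block = tuple(values[i:i + k])
        seen.setdefault(block, []).append(i)
    return [starts for starts in seen.values() if len(starts) > 1]

# Invented data: positions 0-3 and 5-8 hold the same four measurements.
data = [0.61, 0.58, 0.93, 0.47, 0.12, 0.61, 0.58, 0.93, 0.47, 0.30]
print(repeated_blocks(data))  # [[0, 5]]
```

A real check would also have to account for legitimately repeated values. As Munday argues, a method that can only yield a limited combination of numbers will produce some duplicates by chance, so a screen like this flags candidates for scrutiny, not proof of anything.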

And here’s someone we’ve heard from before:

[Nicholas] Brown . . . identified problems of a different nature in two more Munday papers that had not been flagged as suspicious by the Clark team and on which Dixson was not an author. At about 20 places in a very large data file for another 2014 paper in Nature Climate Change, the raw data do not add up to total scores that appear a few columns farther to the right. And in a 2016 paper in Conservation Physiology, fractions that together should add up to exactly one often do not; instead the sum varies from 0.15 to 1.8.

Munday concedes that both data sets have problems as well, which he says are due to their first authors hand copying data into the Excel files. He says the files will be corrected and both journals notified. But Brown says the anomalies strongly suggest fabrication. No sensible scientist would calculate results manually and then enter the raw data and the totals—thousands of numbers in one case—into a spreadsheet, he says.

To him, the problems identified in the data sets also cast suspicions on the “ludicrous effect sizes” in many of the 22 papers flagged by the whistleblowers. “Suppose you’re going to the house of somebody you think may have been handling stolen televisions, and you found 22 brand new televisions in his basement, and three had serial numbers that corresponded to ones that have been stolen from shops,” Brown says. “Are you going to say, ‘Yeah, we’ll assume you’ve got the purchase receipts for the other 19?’”

OK, now we’re getting rude. If talking about “methodological or analytical weaknesses” is needlessly aggressive, what is it when you liken someone to a thief of television sets?
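For what it’s worth, the two checks Brown describes (totals that should equal the sum of the raw counts, and fractions that should sum to exactly one) are the sort of thing anyone can script in a few lines. A minimal sketch in Python, with invented numbers standing in for the real spreadsheets:

```python
# Each invented row holds (raw counts, reported total, fractions that
# should partition the sample and hence sum to 1). The last row is
# deliberately inconsistent in both ways.
rows = [
    ([3, 4], 7, [0.5, 0.5]),
    ([5, 4], 9, [0.3, 0.7]),
    ([2, 1], 4, [0.9, 0.9]),  # 2 + 1 != 4, and 0.9 + 0.9 = 1.8
]

# Check 1: do the raw counts add up to the reported totals?
bad_totals = [i for i, (raw, total, _) in enumerate(rows) if sum(raw) != total]

# Check 2: do the fractions sum to 1 (within floating-point tolerance)?
bad_fracs = [i for i, (_, _, fr) in enumerate(rows) if abs(sum(fr) - 1.0) > 1e-9]

print(bad_totals)  # [2]
print(bad_fracs)   # [2]
```

The point is not that these checks are clever; it’s that they’re cheap, which makes it all the more striking when published data fail them.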

Back to Javert

I have not looked into the details of this case. It could be that one or more authors of those papers were committing fraud, it could be that they didn’t know what they were doing, it could be that they were just really sloppy, or it could be some combination of these, as with the Cornell pizza researcher guy who seemed to have a big pile of numbers in his lab and would grab whatever numbers he needed when it was time to write a paper. It could be that none of those findings are replicable, or it could be that the errors are minor and everything replicates. Someone else will have to track all this down.

What bothers me is the way the critics have been attacked. There was that guy on twitter quoted above, and then there’s Munday, one of the original researchers, who in 2016 wrote: “It seems that Clark and Jutfelt are trying to make a career out of criticizing other people’s work. I can only assume they don’t have enough good ideas of their own to fill in their time . . . Recently, I found out they have been ‘secretly’ doing work on the behavioural effects of high CO2 on coral reef fishes, presumably because they want to be critical of some aspects of our work.”

The idea that there’s something shameful about critically assessing published work, or that it’s bad to “make a career” out of it, or that you can “only assume” that if someone is critical, that “they don’t have enough good ideas of their own to fill in their time” . . . That’s just a horrible, horrible attitude. Criticism is a valuable and often thankless part of science.

And to slam the critics for going public . . . jeez! They tried everything and were stonewalled at every turn, so, yeah, they went public. Why not? The original papers were published in public. I don’t see why the reputations of the scientists who wrote those papers should be considered more valuable than the social value of getting the research right.

This is so annoying.

I think the original researchers should’ve said something like this:

We very much appreciate the efforts of these outside critics who found serious errors in our published papers. We are carefully looking into our data processing and analysis pipeline and will share all of it as soon as possible. In the meantime, we consider all our published findings to be tentative; we will only be able to say more after a careful assessment of our data and procedures. Whatever happens, we are pleased that our studies were reexamined so carefully, and again we thank the critics for their careful work.

P.S. We appreciate that some people have been defending us on social media and that our universities have stood by us. We pride ourselves on our research integrity and we very much regret the sloppiness in our work that has led to our errors. But, please, do not defend us by attacking our critics. There was nothing improper or inappropriate in their criticism of our work! They found flaws in our published papers, and it was their scientific duty to share this information with the world. Telling us personally wouldn’t have been enough. Our papers are in the public record. Our papers did have methodological weaknesses—that is clear, as we report values that are not mathematically or physically possible—and so the authors of the critical paper should not be attacked for pointing out these errors.

22 thoughts on “The Javert paradox rears its ugly head”

  1. It’s bad indeed that this is happening, but unfortunately we can never take for granted that criticism of ideas won’t be conflated with attacks. If encouragement helps at all, I’m glad you are doing this. One person’s Javert is another person’s Andy Dufresne.

  2. Andrew –

    It’s interesting to me that I read that article (in fact I linked it in the previous related thread, and was surprised you didn’t comment, as it seemed right up your alley) but had a different take-away than you.

    Not unlike Tiago, my take-away was that there were two diametric ways to look at the situation, and that I’d have to wait until more information becomes available to fall out clearly on one side or the other.

    For example, I don’t think that the quote you excerpted is saying that “there’s something shameful about critically assessing published work” in some general sense, but rather that it was a reaction to this particular group of people and their particular criticism. Whether this particular group merits that particular accusation is important to evaluate, but just conflating those two characterizations, imo, introduces a whole other set of problems and doesn’t particularly lead towards a satisfying resolution.

    That said, I don’t disagree that your recommended response would be a more appropriate response – and perhaps that’s the more important point.

    • Joshua:

      Sure, but when people attack whistleblowers and dissenters it’s never in general terms. Nobody says that they oppose criticism for its own sake. There’s always some reason: maybe the critics are big shots punching down at vulnerable junior scholars, maybe as in this case it’s the opposite, that critics are presumptuous careerists sniping at their betters. Or maybe the problem is that the critics are three women and four men rather than three men and four women. There’s always some pretext.

      • Andrew –

        Yah. It’s just that I’ve often seen this gamed from the other direction – in particular with climate science – where people with an agenda have pursued a basically dishonest path to make accusations of fraud and to discredit science they don’t like. And then when (climate) scientist object, they are accused of gatekeeping and being aggressive to lay cover for fending off legit criticism of bad science.

        So my feeling is that part of what’s important is to avoid fitting this kind of scenario into some kind of larger narrative, and to instead stick with the specifics in context.

        That isn’t to say I think there isn’t a legit discussion to be had about the grander narrative. But that should be approached from a “meta” perspective, IMO – with meta-level information about the larger patterns playing out. Otherwise this kind of situation gets leveraged and exploited as just another proxy for political tropes about “experts” – where expertise gets politicized in such unproductive ways. COVID being a prime example.

    • The early stages of this controversy have a lot in common with the early stages of the controversy over Oona Lonnstedt’s postdoctoral work in Sweden on microplastics and perch larvae. The important connection is that Lonnstedt’s PhD in Australia was partly supervised by Philip Munday.

      When the Clark group first criticized Lonnstedt’s postdoc work, Lonnstedt vigorously denied the accusations, claimed that her original data had been lost when her laptop was stolen, and accused her critics of professional jealousy; a Swedish university committee exonerated her. Lonnstedt’s PhD advisors defended her at the time, but quietly faded into the background when Lonnstedt’s work and her defense of it were shown to be highly suspect and her paper retracted. Similar criticisms of Lonnstedt’s PhD work with Munday on lionfish were also raised (experiments not observed by anyone else, impossible numbers of individual fish in the behavior experiments, faked photographs of individual fish).

      Now the Clark group is directly criticizing Munday’s research and another of Munday’s trainees. Those similarities and connections may account for some of the overheated language that has been used to criticize the Munday & Dixson papers (e.g., stolen TVs), as well as some of Munday’s ludicrous protestations. Maybe the critics find it easy to extrapolate from the early similarities to later similarities, but that seems like a lazy way to respond to the available information. Better to note the criticisms and wait to see if there’s a fire under all that smoke.

      OTOH, John Bruno’s response seems equally lazy, and he got way out over his skis on this one.

      Good post by Jeremy Fox here.

      • “The early stages of this controversy have a lot in common with the early stages of the controversy over Oona Lonnstedt’s postdoctoral work in Sweden on microplastics and perch larvae.”

        Sure, we come for the detective novels and beef patties and hierarchical modeling, but we stay for the perch larvae.

        • Mike, Jrc:

          What’s striking about so many of these cases is how at first they seem like close judgment calls, and then all of a sudden they don’t. So often these disputes enter into a stage where one side is presenting data and statistical analysis and the other side either goes to ground or starts hollering about getting disrespected. We saw this with the disgraced primatologist, with pizzagate, with Weggy, with the sleep guy, and so many others.

        • Yeah, those are the ones we hear about, because someone is making noise about it on the internet. I mean, I get it: it is infuriating when someone is wrong on the internet and it makes people want to yell, but the internet only has so much attention, and so “ignore” is often a dominant strategy for the original authors (maybe this is what you mean by “go to ground,” but it isn’t like these people hide away; they just go on with their research lives as though nothing happened*):

          “we find that at least 31.2% of the citations to retracted articles happen a year after they have been retracted. And that 91.4% of the post-retraction citations are approving—note no concern with the cited article.”

          n.b.: with the several manuscripts I’ve sent people of the “you are wrong on the academic publishing internet” variety, I’ve been ignored or engaged with respectfully, but never badgered (on email or on the internet). But in either case, the original work continues to be cited, even when the replication paper showed the original was completely wrong. So I’m thinking “ignore” is really effective, at least when neither party wants to internet about it.

          *in a sense this seems right – you make a mistake and then you move forward, and you don’t really have a responsibility to, like, continually promote your own mistake. But it would be nice if there was a way to signal the field to ignore something you once wrote or to point to a newer paper that changes how people should interpret your original. Then you could play accept/move-on and Science wouldn’t suffer.

        • This raises two questions:
          1. How many articles are citing retracted work? Is it 0.1%, 1%, or 10%?
          2. How well marked are retracted articles? How prominent is the link between the original article and the retraction/addendum?
          Has anyone looked at question 1 for, say, a specific journal and time period?

      • Jeremy Fox mentions Nick Brown’s blog. As usual, Nick’s take is most interesting. Nick also provides a link to an Excel file associated with one of Dixson’s papers. Again, the Excel file is worth a look, though I certainly did not do all the work that Nick did.

  3. Great post. Hope AG’s views are becoming more widely shared/accepted across the social science world. I think there is some (slow) progress.

  4. “Although she denies making up data, ‘There hypothetically could be an error in there,’ she said, perhaps because of mistakes in transcribing the data; ‘I don’t know. I’m human.'”

    This seems weirdly defensive. Why not, “Thanks for pointing out a possible transcription error”?

    • > This seems weirdly defensive.

      Presumably she was defensive because she thought she was being attacked and accused of fraud illegitimately – and that a transcription error was being pointed to as evidence of fraud?

      I’m not defending her defensiveness, and it may well turn out to be more than just a transcription error, but I don’t think it takes a huge amount of imagination to see why it might not have been a “weird” response.

      • Good point. Didn’t think about the context enough, which upon further thought would probably make most people defensive.

      • Actually it doesn’t even strike me as that defensive. Adding some context:

        “said she did not know about the allegations. Although she denies making up data, ‘There hypothetically could be an error in there,’ she said”

        She’s hearing about “allegations” for the first time and acknowledging that it’s possible that she made an error. Seems pretty normal. Not defending the person or the work: just the statement.

  5. ‘The idea that there’s something shameful about critically assessing published work, or that it’s bad to “make a career” out of it, or that you can “only assume” that if someone is critical, that “they don’t have enough good ideas of their own to fill in their time” . . . That’s just a horrible, horrible attitude.’

    I guess whether this is a “horrible, horrible attitude” or not depends on your POV. If you’re trying to advance science, then, yes, “horrible, horrible attitude.” If you’re trying to paper over your shitty – or fraudulent – work, or back up the shitty or fraudulent work of a colleague, it’s a great attitude, pretty much the best tool in your tool box.

  6. Twitter is designed to create controversy; better to ignore it altogether and reread BDA or RAOS instead. Btw, shouldn’t RAOS be added as a link from the “books” tab on this blog?

  7. The thread that the tweet from John Bruno is in is interesting, there seems to be a bitter divide among some fish researchers on this.

    Contrary to John Bruno’s claim, it’s not a bro-pocket (as you point out); it seems more like a group of young researchers has found that the older researchers were a bit too chummy and weren’t sceptical enough of each other’s (or their students’) research.

    My favourite line refers to the researchers whose work hasn’t replicated as “great scientists”. It reminds me of one of the scenes from Moneyball.

    “They’re good scientists”

    “If they’re good scientists then why isn’t their science good?”

Comments are closed.