Should we spend so much time talking about cheaters and fraudsters?

Palko points us to this article, “Why Are Gamers So Much Better Than Scientists at Catching Fraud?”, by psychology researcher Stuart Ritchie. It’s subtitled, “A pair of recent cheating scandals—one in the ‘speedrunning’ community of gamers, and one in medical research—call attention to an alarming contrast.”

Hey—I remember the speedrunning story!

Ritchie’s article is interesting. I’d change “catching” to “doing something about” in the headline—scientists are actually very good at catching fraud, including in the example that Ritchie gives in his article! The problem is not that scientists can’t or don’t catch fraud; it’s that when scientists catch fraud, other scientists tend to ignore the whole thing.

Science is kind of like . . . someone poops on the carpet when nobody’s looking, some other people smell the poop and point out the problem, but the owners of the carpet insist that nothing has happened at all and refuse to allow anyone to come and clean up the mess. Sometimes they start shouting at the people who smelled the poop and call them terrorists. Meanwhile, other scientists carefully walk around that portion of the carpet: they smell something, but they don’t want to look at it too closely.

A lot of business and politics is like this too. The difference is, we expect this sort of thing to happen in business and politics: rulebreakers keep doing it over and over again. Science is supposed to be different.

Why did I write this post?

The above carpeting analogy, charming as you may find it, is not the point of this post. Rather, what I really want to talk about is how much time we spend talking about fraud.

It starts with Ritchie’s article, which, again, I liked. There was just one thing, though (other than the title): he focuses on scientists cheating, but I think the much bigger problem is scientists who are not trying to cheat but are just using bad methods with noisy data. Indeed, the focus on cheaters can let incompetent but sincere scientists off the hook. Recall our discussion from a few years ago, “The flashy crooks get the headlines, but the bigger problem is everyday routine bad science done by non-crooks.”

And, yes, I’m part of the problem: cheaters are fun to write about. Also, when scientists cheat, the scandal is that much worse when nobody seems to care about their errors.

So, should we spend so much time talking about cheaters and fraudsters?

Pro: These are the most extreme cases. If the scientific community often can’t even get its act together to disown and condemn cheating and fraud, this tells us there’s a major problem.

Con: Most bad science is not intentional cheating. By focusing on cheating, we can give the misleading impression that, if you’re a scientist and you don’t cheat, you’re ok. Actually, though, honesty and transparency are not enough.

So I see arguments on both sides of this one.

11 thoughts on “Should we spend so much time talking about cheaters and fraudsters?”

  1. An idle thought, but I wonder if the resistance to correcting errors in the social sciences is partly due to people thinking about manuscripts more as performances or pieces of fiction than as statements of fact. Like, a movie or a novel can be truly awful and misinformed, but it’d be hard to imagine many situations where I’d want to retract a piece of fiction from the public record.

    It feels like some people think about the whole process as storytelling, and the statistics as “special effects” that just sort of support the story.

    I’m not sure this analogy holds perfectly, but there does seem to be a common opinion that most scientific products should remain the same post-publication, with no edits, even if they’re wrong.

    Except textbooks, I guess, but there are good margins on reselling those new editions…

    • Sean:

      Sometimes I think of a sports analogy. A win is a win, even if it turns out that the referee made a mistake during the game. And it does seem that for many researchers, the main goal of research is to get “wins.”

      • Yeah, that is a good analogy. Better, really.

        When I was first learning the ropes of publishing in grad school, I remember being struck by how much academic publishing was sort of like a game. A really complicated game with extremely esoteric rules, but a game nonetheless that could be won with sufficient effort.

        • Yes, it is entirely a game. But it’s like we collectively treat the winners of the Super Bowl as though that makes them an expert on everything from nutrition to strength training to the best way to treat cancer.

          Any large-scale system that determines the livelihoods of those involved will inevitably evolve into a game with esoteric rules, rewarding those who are the best players. Those winners will then go on to control the rules of the game for later players.

  2. The problem with your framing is that science publication isn’t so much a rug as a conveyor belt. The vast majority of published research is frankly inconsequential and will be out of sight and out of mind in a matter of months, so the incentive to ‘rock the boat’ by addressing bad methodology is limited – there’s plenty of research that’s bad not on the merits of its statistics but because the publish-or-perish rat race has set such a low bar for ‘publishable’ research. Going to the trouble of addressing the failings of papers that don’t matter in the first place just isn’t worth it in most cases.

    This is, I think, where the Econ model offers an alternative to other fields’ ‘minimum viable paper’ model – an econ paper can take years to write (pre-grad-school I worked as an RA on an AER paper that was 5+ years in the making) and serves as a landmark for future research, whether good or bad. This a) makes it harder for outright nonsense to slip through, as the review process is more substantive (though obviously only as good as the reviewers), and b) provides a cleaner battleground for methodological criticism, because the ‘canon’ of meaningful research is much smaller and clearer. See, for example, the infamous study linking abortion and crime, which has become something of a dead horse that econometricians beat to prove their worth.

    • That model is flawed in a different way. If it takes years to get a good paper published then there is very little chance it will have a significant effect on the world outside of academia. There will be a few exceptions, but if you are trying to have an impact on something like current inflation policy, then I guess the profession will take a pass and let somebody else give policy advice.

  3. Try to see it differently: if the scientific community cannot get its act together to help improve unintentional ‘bad science’, how can it expect to debunk fraud? It’s often in the inability to make small changes that we see the incapacity for major improvements.

  4. I’m not sure that scientists are unique.

    Medical doctors look the other way (walk around the poop) even when they know their colleagues are hacks (I’ve done medical research in a hospital setting for the last 25 years).

    Lawyers do the same (I’m the child, grandchild, and three-time nephew of lawyers).

    I just think humans are generally unwilling to confront poopers and then deal with all the fallout (including being called Stasi, which is really the least of the bad things that can happen).

    I guess the issue is that science and scientists often portray themselves as somehow more pure and above that sort of “human” behavior.

    • Michael said,

      “Lawyers do the same (I’m the child, grandchild, and three-time nephew of lawyers).”

      Hmm — I’m the grandchild and two-time niece of lawyers. Maybe it’s growing up with skepticism for the lawyer relatives (and/or the lack of respect for them by relatives in the generations before mine) that makes me so inclined to question “authority”.

  5. If every blog on statistics and the social sciences were to devote as many resources to “talking about cheaters and fraudsters” as this blog does, it would be too much. But given how frequently people walk around the problem, this blog’s allocation of resources seems fine to me.

    Bob76
    PS. I’m actively walking around such a problem right now. I know of a book from a major university press (New Haven, Cambridge, etc.) that has some deep errors in it. I haven’t done anything about it. I am working on a paper that attacks a central thesis of the book. After many rejections, I got it past peer review for the major conference in this field. We will see if the author of the book confronts me at the conference.

    But I have not written a book review that points out these errors.

  6. “The Fundamental Publication Error: Correctives do not Necessarily Produce Correction

    The fundamental publication error refers to the belief that just because some corrective to some scientific error has been published, that there has been scientific self-correction (Jussim, 2017a). A failure to self-correct can occur, even if a corrective has been published, by ignoring the correction, especially in outlets that are intended to reflect the canon. With most of the examples presented here, not only are the original claims maintained by violation of fundamental norms of scientific evidence, but ample corrections have been published. Nonetheless, the erroneous claims persist.”

    Source: Jussim et al. (2019). Scientific gullibility. In J. Forgas & R. Baumeister (Eds.), The Social Psychology of Gullibility: Fake News, Conspiracy Theories and Irrational Beliefs (The Sydney Symposium on Social Psychology), pp. 279-303. New York: Routledge.
    Full article can be found on my pubs page.
