Cheating in sports vs. cheating in journalism vs. cheating in science

Sports cheating has been in the news lately. Nothing about the Astros, but the chess-cheating scandal that people keep talking about—or, at least, people keep sending me emails asking me to blog about it—and the cheating scandals in poker and fishing. All of this, though, is nothing compared to the juiced elephant in the room: the drug-assisted home run totals of 1998-2001, which kept coming up during the past few months as Aaron Judge approached and then eventually reached the record-breaking total of 62 home runs during a season.

On this blog we haven’t talked much about cheating in sports (there was this post, though, and also something a few years back about one of those runners who wasn’t really finishing the races), but we’ve occasionally talked about cheating in journalism (for example here, here, here, and here—hey, those last two are about cheating in journalism about chess!), and we’ve talked lots and lots about cheating in science.

So this got me thinking: What are the differences between cheating in sports, journalism, and science?

1. The biggest difference that I see is that in sports, when you cheat, you’re actually doing what you claim to do, you’re just doing it using an unauthorized method. With cheating in journalism and science, typically you’re not doing what you claimed.

Let me put it this way: Barry Bonds may have juiced, but he really did hit 7 zillion home runs. Lance Armstrong doped, but he really did pedal around France faster than anyone else. Jose Altuve really did hit the ball out of the park. Stockfish-aided or not, that dude really did checkmate the other dude’s king. Etc. The only cases I can think of, where the cheaters didn’t actually do what they claimed to do, are the Minecraft guy, Rosie Ruiz, and those guys who did a “Mark Twain” on their fish. Usually, what sports cheaters do is use unapproved methods to achieve real ends.

But when journalism cheaters cheat, the usual way they do it is by making stuff up. That is, they put things in the newspaper that didn’t really happen. The problem with Stephen Glass or Janet Cooke or Jonah Lehrer is not that they achieved drug-enhanced scoops or even that they broke some laws in order to break some stories. No, the problem was that they reported things that weren’t true. I’m not saying that journalism cheats are worse than sports cheats, just that it’s a different thing. Sometimes cheating writers cheat by copying others’ work without attribution, and that alone doesn’t necessarily lead to falsehoods getting published, but it often does, which makes sense: once you start copying without attribution, it becomes harder for readers to track down your sources and find your errors, which in turn makes it easier to be sloppy and reduces the incentives for accuracy.

When scientists cheat, sometimes it’s by just making things up, or presenting claims with no empirical support—for example, there’s no evidence that the Irrationality guy ever had that custom-made shredder, or that the Pizzagate guy ever really ran a “masterpiece” of an experiment with a bottomless soup bowl or had people lift an 80-pound rock, or that Mary Rosh ever did that survey. Other times they just say things that aren’t true, for example describing a 3-day study as “long-term”. In that latter case you might say that the scientist in question is just an idiot, not a cheater—but, ultimately, I do think it’s a form of cheating to publish a scientific paper with a title that doesn’t describe its contents.

But I think the main way scientists cheat is by being loose enough with their reasoning that they can make strong claims that aren’t supported by the data. Is this “cheating,” exactly? I’m not sure. Take something like that ESP guy or the beauty-and-sex-ratio guy, who manage to find statistical methods that give them the answers they want. At some level, the boundary between incompetence and cheating doesn’t really matter; recall Clarke’s Law.

The real point here, though, is that, whatever you want to call it, the problem with bad science is that it comes up with false or unsupported claims. To put it another way: it’s not that Mark Hauser or whoever is taking some drugs that allow him to make a discovery that nobody else could make; the problem is that he’s claiming something’s a discovery but it isn’t. To put it yet another way: there is no perpetual motion machine.

The scientific analogy to sports cheating would be something like . . . Scientist B breaks into Scientist A’s lab, steals his compounds, and uses them to make a big discovery. Or, Scientist X cuts corners by using some forbidden technique, for example violating some rule regarding safe disposal of chemical waste, and this allows him to work faster and make some discovery. But I don’t get a sense that this happens much, or at least I don’t really hear about it. There was the Robert Gallo story, but even there the outcome was not a new discovery, it was just a matter of credit.

And the journalistic analogy to sports cheating would be something like that hacked phone scandal in Britain a few years back . . . OK, I guess that does happen sometimes. But my guess is that the kinds of journalists who’d hack phones are also the kind of journalists who’d make stuff up or suppress key parts of a story or otherwise manipulate evidence in a way to mislead. In which case, again, they can end up publishing something that didn’t happen, or polluting the scientific and popular literature.

2. Another difference is that sports have a more clearly-defined goal than journalism or science. An extreme example is bicycle racing: if the top cyclists are doping and you want to compete on their level, then you have to dope also; there’s simply no other option. But in journalism, no matter how successful Mike Barnicle was, other journalists didn’t have to fabricate to keep up with him. There are enough true stories to report that honest journalists can compete. Yes, restricting yourself to the truth can put you at a disadvantage, but it doesn’t crowd you out entirely. Similarly, if you’re a social scientist who’s not willing to fabricate surveys or report hyped-up conclusions based on forking paths, yes, your job is a bit harder, but you can still survive in the publication jungle. There are enough paths to success that cheating is not a necessity, even if it’s a viable option.

3. The main similarity I see among misbehavior in sports, journalism, and science is that the boundary between cheating and legitimate behavior is blurry. When “everybody does it,” is it cheating? With science there’s also the unclear distinction between cheating and simple incompetence—with the twist that incompetence at scientific reasoning could represent a sort of super-competence at scientific self-promotion. Only a fool would say that the replication rate in psychology is “statistically indistinguishable from 100%”—but being that sort of fool can be a step toward success in our TED/Freakonomics/NPR media environment. You’d think that professional athletes would be more aware of what drugs they put in their bodies than scientists would be aware of what numbers they put into their t-tests, but sports figures have sometimes claimed that they took banned drugs without their knowledge. The point is that a lot is happening at once, and there are people who will do what it takes to win.

4. Finally, it can be entertaining to talk about cheating in science, but as I’ve said before, I think the much bigger problem is scientists who are not trying to cheat but are just using bad methods with noisy data. Indeed, the focus on cheaters can let incompetent but sincere scientists off the hook. Recall our discussion from a few years ago: The flashy crooks get the headlines, but the bigger problem is everyday routine bad science done by non-crooks. Similarly, with journalism, I’d say the bigger problem is not the fabricators so much as the everyday corruption of slanted journalism, and public relations presented in journalistic form. To me, the biggest concern with journalistic cheating is not so much the cases of fraud as when the establishment closes ranks to defend the fraudster, just as in academia there’s no real mechanism to do anything about bad science.

Cheating in sports feels different, maybe in part because a sport is defined by its rules in a way that we would not say about journalism or science.

P.S. After posting the above, I got to thinking about cheating in business, politics, and war, which seem to me to have a different flavor than cheating in sports, journalism, or science. I have personal experience in sports, journalism, and science, but little to no experience in business, politics, and war. So I’m just speculating, but here goes:
To me, what’s characteristic about cheating in business, politics, and war is that some flexible line is pushed to the breaking point. For example, losing candidates will often try to sow doubt about the legitimacy of an election, but they rarely take it to the next level and get on the phone with election officials and demand they add votes to their total. Similarly with business cheating such as creative accounting, dumping of waste, etc.: it’s standard practice to work at the edge of what’s acceptable, but cheaters such as the Theranos gang go beyond hype to flat-out lying. Same thing for war crimes: there’s no sharp line, and cheating or violation arises when armies go far beyond what is currently considered standard behavior. This all seems different than cheating in sports, journalism, or science, all of which are more clearly defined relative to objective truth.

I think there’s more to be said on all this.

30 thoughts on “Cheating in sports vs. cheating in journalism vs. cheating in science”

  1. To #1, I think the main sports example of cheating by not doing what you claim you did occurs in endurance sports (marathons, long-distance cycling events, etc.), although admittedly this is usually an issue with amateurs cheating, not professionals.

    • Anon:

      Yes, I do mention that (“The only cases I can think of, where the cheaters didn’t actually do what they claimed to do, are the Minecraft guy, Rosie Ruiz, and those guys who did a “Mark Twain” on their fish.”); I just don’t think this describes most of the sports cheating that people are concerned about. The usual problem is not people saying they did something they didn’t; it’s people achieving some aim using a banned technique.

  2. Or, Scientist X cuts corners by using some forbidden technique, for example violating some rule regarding safe disposal of chemical waste, and this allows him to work faster and make some discovery. But I don’t get a sense that this happens much, or at least I don’t really hear about it.

    On the contrary, this describes probably 99.99% of what gets published. It is so much easier to ignore confounds, throw out any inconvenient data (there is always a legitimate reason available), test a default null hypothesis, then commit any of a number of logical fallacies to (seem to) conclude something of interest.

    People trying to figure out what it takes to get reproducible results, and those deriving actual theories to be tested, cannot compete with this counterfeit “science”. It is like Gresham’s law: the bad science drives out the good.

        • Six years downstream, I have the following update to give (trying to keep it brief):

          – it is absolutely possible to avoid any of the usual bad practices and still have a career and a good publication record, but one has to work harder and be persistent and patient (many more rejections)

          – the main reason people don’t go down this route is that they don’t know what the issue is; this comes from a broken education system and poorly trained (in data interpretation) advisors and editors. Once people become senior they can’t turn around and say, wait a minute, everything I have been doing so far is wrong; this leads to entrenchment effects

          – one price to be paid for telling the story in a paper without embellishment and creative wording is that one sometimes is unable to publish in prestige journals; I am comfortable with that, but it can cost early-career postdocs cushy jobs; they may have to settle for less prestigious universities

          – the adversarial (towards one’s own ideas) approach that Andrew suggested in his blog post is very important and really works well, but it does mean that most of one’s theories are going to come out wrong; I don’t know many people (actually, I can’t think of a single person in my field) who are willing to give up on one of their own scientific proposals in the face of counterevidence (side conditions are created that allow the theory to live)

          – however, one does have to be publishing steadily (say one paper a year on average for an early career postdoc without a big lab or huge resources) to stand any chance of getting a tenured job—that reality is impossible to change

  3. I don’t buy it. Your examples of sports cheating are selective – there are cases of athletes’ gambling, throwing games, etc. In science cheating, there are plenty of examples of poor research leading to selection biases or poor measurements designed (consciously or not) to get desired results – they really did get this data (rather than making it up), but it is cheating. So, I’m not sure I buy that there are qualitative differences in cheating across these areas. More importantly, I think it is interesting to look at the incentives and abilities to cheat in these areas. I think it is harder to cheat in sports – detection is easier (although I think this is only a matter of degree). The incentives differences are truly unfortunate – the stakes in science are so much lower than in sports (not really, as some of the recent scandals involving athletes seem pitiful – former professional football players involved in schemes resulting in hundreds of thousands of illegal dollars – a drop in the bucket compared with their football earnings).

    • Dale:

      Maybe you’re right on your first point. I guess the right way to think about this systematically is to start by considering different sorts of cheating and then thinking about how this works out in different areas of endeavor such as sports, journalism, science, etc.

      Regarding your last point: My guess is that pro athletes have been allowed to do whatever they want for so long, that they just haven’t internalized the idea that the rules apply to them too. They don’t do the cost-benefit calculation—is a few hundred thousand dollars worth the risk of getting caught—because they just don’t have experience getting caught. Consider that recent Brett Favre thing: Brett is just so used to asking people for favors and having them say yes, that that’s how he goes about his life.

  4. One difference between “openly competitive” activities like sports and business – journalism is a business also – and “not-directly-competitive” activities like science and in some ways journalism: the openly competitive activities have cheating detection systems and established penalty systems.

    For politics the situation is messy. There are different types of cheating, ranging from misrepresenting the truth to stuffing the ballot box, which have different penalty systems. The latter is illegal and a subject of law enforcement. News reporting is the main cheating detection system for political truth, but there are no clear rules about what constitutes a mistruth except at the extremes (Trumpian lying); and it’s a highly informal system that is selectively enforced.

    Science has virtually no formal cheating detection, except a kind of news reporting performed by other scientists. AFAIK, there are formal penalties, but they only come into play at the extremes. There doesn’t seem to be a “misdemeanor” system of any kind, formal or informal.

    • There are many examples of “formal” systems: drug testing in sports, defamation law in journalism (in some countries), self-policing in some professions (medicine, law, etc.), and so on. There are many informal systems in science: blogs, journals (retractions), published critiques, etc., and in business and politics the ultimate protection against cheating is customer/voter reactions. I think we all view these informal systems as highly imperfect – but so are the formal ones. The formal ones may even be worse, since they often become corrupted themselves (since so much is at stake). The only real protection is a more educated public, where cheating ends up being self-defeating. Maybe chipmunk will want to weigh in on how cheating would be stupid for any business due to competition. If only that were true. The competition is often not sufficient to make cheating (I’m including lying, deception, and treating customers as stupid – admittedly a stretch from “cheating”) unprofitable. And cheating is easier than ever to detect, but harder than ever to prove.

      Would you favor creation of an official “statistical integrity institution?” I find the idea attractive until I think about who would staff it, how it would conduct investigations, and what penalties it could impose.

      • Science already has a cheating detection system. Someone predicts something surprising, then if it happens, you can be pretty sure no cheating was involved.

        Same with multiple competing groups all over the world doing x, y, z and observing pretty much the same outcome.

        The problem is researchers have stopped using science; instead they are testing strawman hypotheses while claiming direct replication is inefficient. Given the prevailing norms, a “statistical integrity institution” will only further entrench bad practices.

        • Yup, like when that Google guy claimed that his chatbot could do all these amazing things, but then he provided no code or transcript, it makes us doubt that he was accurately describing what happened. Or when those social science researchers claimed to use data from surveys they’d collected but then they had no documentation about the claimed surveys.

      • Dale said, “There are many examples of “formal” systems: drug testing in sports, defamation in journalism (in some countries), self-policing in some professions (medicine, law, etc.), and so on.”

        Self-policing in medicine is one that particularly irks me — I’ve had too many experiences with physicians who do ridiculous things — things like prescribing a medication for ulcers, when the problem is a pulled rectus abdominis; or trying to remove a growth in the nose by freezing it off with liquid nitrogen.

  5. What you’re getting at is the nature of the “rules of the game” when you talk about boundaries, goals, authorized/unauthorized, etc.
    With sports, we have hard rules: the rules specifically say what is banned, and so when someone breaks the rule, it’s relatively easy to “convict”. If someone comes up with new cheating innovations, the community gets together to modify the rules. But we’re still using hard rules. (There are still some nuances, e.g. extreme blood measurements may be claimed to be genetic rather than doping, but comparatively, the wiggle room is small.)
    In journalism, science, politics, war, most “rules” are soft. It’s often a judgment call – especially when it comes to statistics – whether someone has “cheated”. To be sure, there are a few hard rules as well – e.g. making up data, fabricating stories, but most rules are not hard.
    Business is a mix of hard and soft rules. There are accounting rules and laws that ban certain practices, but enforcement (at least in the U.S.) is often lax. Soft rules don’t seem to stand the test of time – e.g., the social contract between employers and employees: employers have found no resistance to being disloyal to employees, and eventually employees treat employers the same way.

  6. Your third point prompts an interesting discussion on where we draw the line for cheating.

    “With science there’s also the unclear distinction between cheating and simple incompetence”

    I believe this could apply to journalism as well. When is a journalist cheating vs. not doing their due diligence? If a publication puts out a biased article, is it simply that, a biased piece of work, or is it cheating? If it is cheating, is that something to just note as a reader, or is there a reason for punishment?

  7. “the drug-assisted home run totals of 1998-2001”

    Was using PEDs against the rules at that point? Illegal, perhaps, but that’s not within MLB’s area to prosecute (as we would of course realize if the issue were a DUI conviction: “Player X drove drunk! Put an asterisk against his name.” Yes, yes, DUIs don’t help with performance, but the point remains).

    • “Was using PEDs against the rules at that point?”

      Yes, according to ESPN’s article on “The Steroids Era,” steroids were banned from baseball in 1991. (There wasn’t systematic testing until later, but they were banned during “the drug-assisted home run totals of 1998-2001.”)

  8. I think with science, to use a cheating metaphor, we have to think about the “rules” of the game.

    If the rules of the game are “results that are positive, novel, newsworthy are wins” and we believe the corollary that wins should come with rewards like fame and money, then things like p-hacking, fabrication, and so on are cheating just as in sports. People are using an unauthorized method to achieve that outcome (e.g., we expect real data, not a simulation).

    But if the rules of the game are “We need to find as close approximations of truth as we can muster” then it quickly becomes apparent that there is no easy way to adjudicate who “wins.” The rules are too esoteric and undefined to determine winners! I guess in this framework, sloppy methods and stats are technically playing the game poorly (not cheating, more like never hitting the ball in baseball) except most people can’t tell if you’re bad or good!

  9. What’s funny about baseball is that they apparently have no problem with performance-enhancing surgery, like Tommy John surgery to replace ligaments or LASIK surgery to improve eyesight.

    Performance-enhancing drugs like speed (amphetamines, aka “greenies”) were widely reported in baseball in the 1950s and 1960s. Nowadays (or at least 10 years ago), players get therapeutic use exemptions for speed in the form of Ritalin or Adderall (by getting an ADHD diagnosis), apparently getting diagnosed at twice the rate of the population as a whole. I think coffee and Red Bull and tea are still legal stimulants.

    Diuretics are illegal in baseball, which is too bad if you have high blood pressure. And speaking of blood pressure meds, can you take beta blockers in baseball to calm the shakes? I’m not sure—they’re apparently legal in some sports, but not others (like golf).

    • I’ve always had trouble and mixed feelings about drug issues in sports. I’m tempted to think we should let athletes do anything they want – I can’t see a good way to define what is an “artificial” aid and what is “natural.” To cite just one extreme example: does having better access to quality facilities constitute a natural or artificial edge? I would advocate for protecting participants, particularly young ones, from dangerous substances/practices, but I tend to think we should drop all restrictions.

      But then we are left with sports looking much like science. There may be guidelines and generally accepted practices, but mostly it is a matter of what you can get away with. Such anarchy has its costs, but I often wonder if the “solutions” end up being even more costly.

      The best counterargument I can think of is that we need rules. But the rules of the game seem different to me than the rules of participation in the game. In sports that distinction seems fairly clear. In science, we don’t have rules of participation (though it is tempting to want some) nor do we have rules of the game (at least until it reaches a legal setting or editorial decisions regarding publications and/or grants). So, perhaps we are missing rules of the game in science akin to the rules that apply in sports (e.g., penalties, fouls, etc.). But if these are to be helpful, then the question is who makes up and enforces these rules? It is even a problem in sports (e.g., Olympics judging, referee decisions, etc.). I can’t imagine it working well in science.

      • +1, the last few comments are converging on the notion of “rules” and how in science, most rules are not nailed down. In sports, there are clear governing bodies with teeth. When FINA decided to ban full-body high-tech suits, they were banned. I don’t know how we can begin to define strict rules for proper statistical analyses, let alone enforce such rules. In fact, when someone tries, they often attract critics, e.g. p=0.05, n>30.

      • “I would advocate for protecting participants, particularly young ones, from dangerous substances/practices, but I tend to think we should drop all restrictions.”

        These two views are opposing. Just look at pro cycling when EPO hit in the late 80s and early 90s. Riders were doped to the gills with dangerously high hematocrit levels, which eventually forced the UCI to impose a 50% limit simply for rider safety. There were numerous stories of riders sleeping with HR monitors with alarms to wake them if their HR dropped below 30bpm. I think several Dutch athletes died. Dropping all restrictions by necessity leads to a science experiment with performance rather than safety as the main end goal. Further, people respond very differently to PEDs, so it is no longer a contest of athletic talent but one of drug-response “talent” and access to unscrupulous physicians who push the limit for extra $$$. By enforcing some rules, even cheaters have a hard time with excesses and are generally forced into micro-dosing, which at the very least enforces some safer level for young people who would have a hard time relinquishing years of hard work, sacrifice, and dreaming to some notion of safety if there were no restrictions on anyone.

        • I’m sympathetic to your view, but unconvinced. Despite the rules in place, athletes use training regimens that endanger their health. Some sports (football, for example) are innately dangerous, particularly with the size and strength of professional players. We can and should inform people of the risks they partake in, but at what point should we let them make their own decisions? We tolerate people endangering themselves in most of their activities – why treat sports differently? The “unfairness” of the competition strikes me as a weak argument, particularly when there are so many dimensions of unfairness that we allow.

          I do think of children’s participation differently, in the same way that we regulate many activities for children (drinking, driving, etc.). So, in those cases we may agree. I also may agree that sports competition could become a contest of drug use and physical abuse if we were to drop such restrictions on professional athletes. But I think that is a symptom of the problems, not the problem itself. It is only a problem for the fans – and I have little sympathy for the fans in this case. The problem for the athletes is a product of the intense competition and huge rewards (incentives) that exist. Much like those that exist for statistical abuses (though the rewards probably don’t match those of pro athletes unless the Wansinks and Arielys of the world do even better than I think).

        • “Despite the rules in place, athletes use training regimens that endanger their health.”

          I’m not sure what these regimens are . . . at least not at the level of health endangerment that, as I mentioned, abuse of PEDs can produce.

          “Some sports (football, for example) are innately dangerous, particularly with the size and strength of professional players. We can and should inform people of the risks they partake in, but at what point should we let them make their own decisions? We tolerate people endangering themselves in most of their activities – why treat sports differently?”

          So we should just let the risk have no limit? There’s no ceiling? There are rules in the NFL about tackling and hits. There are rules in pro cycling regarding course design and bicycle safety. There are limits that define boundaries of risk, not just for safety, but also so that we continue to watch football instead of some weird UFC football blend, or we continue to watch pro cycling instead of carnage on the side of some dirt road descent in the Pyrenees.
          I’m a big believer in allowing people to take risks. I raced bicycles at a reasonably high level, and I ride dirt bikes and rock climb. But there is always some line that has to be drawn, and your remark that you both agree with protecting participants and dropping all restrictions seems diametrically opposed to me.

        • jd: in many professional sports, it’s expected that athletes will leave with life-altering injuries, due to a Red Queen’s race. As far back as the 1970s, Stan Rogers sang this about hockey players:

          I tell them to think of the play and not of the fame
          If they’ve got any future at all, it’s not in the game
          ‘Cause they’ll be crippled and starting all over again
          Selling on commission and remembering
          When they were flying, remembering dying

          The details vary, e.g., brain damage in boxing and American football, knee damage in some running sports. Here is one random link, with cw: bodybuilding culture https://www.noobgains.com/college-athlete-injury-statistics/

        • Sean – as a former athlete in a risky sport at a reasonably high level, and one who had a life-altering injury, I’m quite aware of this. I don’t see how this relates to the inconsistency that I see in Dale’s comment about PEDs. Yeah, sports have expected risks to keep up. It doesn’t follow that rules regarding safety should be thrown out the window.

      • I agree. It is my assumption that the drug rules aren’t to protect the players, but to stop admiring youth from emulating them. Since the long-term use of these drugs is clearly harmful, and since only a tiny fraction of players will ever financially benefit, I take the drug rules to be a socially maximizing cost-benefit analysis of sorts. Note that LASIK and Tommy John surgery and superior training facilities lack the downside for players who will never sign a professional sports contract.

  10. > This all seems different than cheating in sports, journalism, or science, all of which are more clearly defined relative to objective truth.

    I wonder about that. With sports, at least, the “objective truth” boundary can certainly be less than clearly objective. In addition to the example Bob points to above, the issue of whether transgender athletes are “cheating” seems to me to be less than objectively defined. When I look at scientific analyses, say, of whether ivermectin is a useful treatment for COVID, I see much disagreement over what is or isn’t effectively “cheating” to argue one position or another. In journalism, the definition of what is editorializing and what is straight reporting seems to align with a subjective assessment of “cheating.”

    I think it’s pretty hard to find any domain where an assessment of what is or isn’t “cheating” isn’t oftentimes influenced by (subjective) biases such as confirmation bias, motivated reasoning, the fundamental attribution error, etc.

  11. One of the biggest issues in cheating is the size of the advantage relative to the variance of skill in the competition. I could cheat at golf 12 different ways and I’d still be nowhere near able to compete on the PGA Tour. But the variance of skill among the top 20 PGA professionals is so low that even a tiny illegitimate advantage magnifies tremendously in terms of earnings.
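    A toy simulation can illustrate the point (all numbers are made up, not drawn from real golf data): when competitors are evenly matched and each round's outcome is mostly round-to-round noise, even a modest per-round edge multiplies a player's chance of finishing first.

```python
import random

def win_prob(edge, noise_sd=1.0, n_players=20, n_rounds=20000, seed=1):
    """Estimate how often player 0 posts the best (lowest) score in a
    field of equally skilled players, given a small per-round `edge`."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_rounds):
        # Everyone has identical true skill; scores are round-to-round luck.
        scores = [rng.gauss(0.0, noise_sd) for _ in range(n_players)]
        scores[0] -= edge  # the cheater's small advantage (lower is better)
        if scores[0] == min(scores):
            wins += 1
    return wins / n_rounds

baseline = win_prob(edge=0.0)  # fair 20-player field: win rate near 1/20
boosted = win_prob(edge=0.5)   # edge worth half the per-round noise
```

    With these hypothetical numbers, an edge worth only half a round's worth of noise more than doubles the win rate over the fair-field baseline, and the tighter the field, the bigger the multiplier on earnings.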
