
Marc Hauser: Victim of statistics?

I have no idea; this is just a theory.

In the past, when disgraced primatologist Marc Hauser has come up in this space, it’s been because he “fabricated data, manipulated experimental results, and published falsified findings” (in the words of the Department of Health and Human Services, as quoted by wikipedia), juxtaposed with the whole “Evilicious” thing.

My take on Hauser as a scientist has been that, if his work has value, it is in its theoretical contributions. It’s clear that Hauser’s ideas of quantitative research are all screwed up (that’s how you get behavior like this: “The committee painstakingly reconstructed the process of data analysis and determined that Hauser had changed values, causing the result to be statistically significant, an important criterion showing that findings are probably not due to chance.”) but he might be a wonderful qualitative researcher. Perhaps he constructed good theories based on his careful observations of monkey behavior.

But what about Hauser’s own behavior? What went wrong there? A few months ago I conjectured that Hauser was a victim of the “great man” theory of science. The great man theory, Harvard snobbery, and generic sexism combined when he analogized boring, data-crunching scientists to “schoolmarms.” On one hand, this is horrible, that someone with these sorts of attitudes and behaviors had power and influence in a major educational institution. At the same time, it’s kinda sad that he was trapped in his macho, Edge Foundation ideology. If Hauser really was talented at qualitative observation and theorizing, it’s a pity that he couldn’t contextualize his strengths and weaknesses, rather than first disparaging quantitative researchers as “schoolmarms” and then turning around and faking his data. Qualitative theories were not enough; he had to rig his data too. The Great Man can do it all, right?

OK, that’s all well and good, but a recent exchange in comments led me to another thought. Here’s what I wrote:

Maybe Hauser had excellent qualitative understanding and was able to come up with excellent theories, and maybe it was just his statistical naivety that led him to expect every experiment to turn out just as predicted, which in turn motivated cheating.

We’ve talked about this before, that people want their theory to be something it can’t be; they want it to be a universal explanation that works in every example.

This is the sense that Hauser was a victim of statistics. More precisely, he was a victim of the attitude that, if a theory is correct, it should work in every example, what Tversky and Kahneman called the fallacy of the law of small numbers. (We discussed an extreme example of this fallacy a few years ago.) Hauser was a victim of statistics in the way that Evel Knievel was a victim of gravity.

What happens if (a) based on your qualitative understanding of the world, you feel that your theory is true, (b) you have a naive belief in the so-called law of small numbers, and (c) your data don’t support your theory (in the conventional way of providing “statistically significant” evidence)? It’s natural, then, to move to (d) adjusting your data to the higher truth, and then (e) lying about it. OK, most researchers don’t go to steps d and e, as they violate various norms of science—but you can see how such steps can seem to make sense.

Also, there are lots of incentives to not be honest about your data. If you’re honest and say something like, “We have this great theory, it makes qualitative sense, but our hard data show no statistical significance,” then I think it’s a lot, lot harder to get published in Science, Nature, PNAS, Psychological Science, or even a lesser-ranking field journal.

Marc Hauser: victim of an unrealistic expectation that, if a theory has value, every experiment (or nearly every experiment) should confirm it. The problem for him was that 80% power was not just a slogan, a way to get grants. He really believed it.
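To make concrete what “80% power” actually implies, here’s a quick simulation. (This is my own sketch, not anything from Hauser’s work; the sample size and effect size are illustrative.) A two-group experiment with a true standardized effect of 0.5 and 63 subjects per group has roughly 80% power, which means that about one run in five fails to come out “statistically significant” even though the underlying theory is correct:

```python
# Simulate many replications of a correctly-designed experiment in which
# the theory is TRUE (a real effect of 0.5 sd exists). With n = 63 per
# group, a two-sided z-test at the 5% level has roughly 80% power, so
# about 20% of these honest experiments still fail to reach significance.
import math
import random
import statistics

random.seed(1)

def one_experiment(n=63, effect=0.5):
    """One two-group experiment; returns True if p < .05 (two-sided)."""
    control = [random.gauss(0.0, 1.0) for _ in range(n)]
    treated = [random.gauss(effect, 1.0) for _ in range(n)]
    se = math.sqrt(statistics.pvariance(control) / n
                   + statistics.pvariance(treated) / n)
    z = (statistics.mean(treated) - statistics.mean(control)) / se
    return abs(z) > 1.96  # "statistically significant" at the 5% level

results = [one_experiment() for _ in range(10_000)]
print(f"share significant: {sum(results) / len(results):.2f}")  # close to 0.80
```

So a researcher who expects every well-run study of a true effect to reach significance is demanding something that the power analysis itself says won’t happen: the missing 20% is built into the design.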

But I have no idea; this is just a theory.

P.S. Why write about a former Ted talk / Harvard professor whose theories have now been forgotten? It’s the usual story. This particular Edge foundation ubermensch may have left the scene, but I suspect the general modes of thinking are as much of a problem today as they were in 1971 when Tversky and Kahneman published that paper. One reason for focusing on extreme cases is that they are good stories; another reason is that they give a clue about how strong these cognitive biases can be. If belief in the law of small numbers is so strong that it can destroy an illustrious career . . . that’s a big deal.


  1. Ben says:

    The Wikipedia article starts:

    “This article is about the evolutionary biologist. For the skydiver, see Marc Hauser (skydiver).”

    There seems to be some sort of professional analogy there.

    • Andrew says:


      I looked up the Marc Hauser (skydiver). He’s also a motivational speaker! He has more in common with Marc Hauser (retired psychology professor) than one might expect. Perhaps they can do a Ted talk or Edge foundation symposium together.

      • Ben says:

        > He’s also a motivational speaker!

        It’s Marc Hausers all the way down.

        • Andrew says:


          It was the “schoolmarms” thing that really set me off. I hate that macho posturing. What next, a claim that he’s really good at opening jars?

          • gec says:

            Another stupid thing about Hauser’s “schoolmarm” remark is that usually the term is used to connote someone who is mindlessly obsessed with the application of rules, but mindless application of (statistical) rules was exactly the approach Hauser exploited!

            • Andrew says:


Not quite. One of the mindless rules is that you’re supposed to code your data using strict protocols, which Hauser notoriously failed to do. (As I understand it, the key step that got him in trouble was that he refused to share the videotapes of his monkeys. As a result, his coding couldn’t be trusted.)

              • gec says:

                True, forgot about that part!

                Still funny that critics of bad science can be ridiculed as both schoolmarms AND terrorists. I wonder if there’s any other domain where those two can be found on the same continuum?

              • Andrew says:


                Think about it . . . we’re called “thugs” and “second-stringers,” we’re called “terrorists,” but we’re also called “schoolmarms” and “replication police” and “Stasi.”

                From the wikipedia entry:

                The Stasi also maintained contacts, and occasionally cooperated, with Western terrorists.

                What do “schoolmarms” and “terrorists” have in common? They both threaten “our way of life,” that is, the right of Harvard professors to say whatever they want, get fat book contracts and adoring NPR profiles, without ever having their ideas questioned. Schoolmarms get in the way of “our way of life” by insisting that the Hausers of the world follow the same rules as everyone else; terrorists get in the way of “our way of life” by scaring people into no longer cooperating with the Hausers of the world.

                To say it another way: Schoolmarms do their thing by trying to bind Hauser etc. to existing rules; terrorists do their thing by reducing the effectiveness of systems that keep the Hausers insulated from the consequences of their actions. From that perspective, James Heathers etc. are “schoolmarms”—those feminized people who insist that everyone dot their i’s and cross their t’s, picky picky picky when some bold manly buccaneer like Marc Hauser publishes claims that don’t fit his data—but they’re also “terrorists” in wanting to blow up the backscratching pal-review, Ted talk, Edge foundation system that allows tenured mediocrities to maintain their reputation for brilliance.

                Like a lot of white-collar offenders, the Hausers of the scientific world are big fans of law and order (in this case, tenure and peer review) when it is used to protect their assets from the envious many, but they get annoyed by people who expect them to follow the rules.

              • Martha (Smith) says:

                Andrew said,
                “Like a lot of white-collar offenders, the Hausers of the scientific world are big fans of law and order (in this case, tenure and peer review) when it is used to protect their assets from the envious many”

                This would still stand if you left off the last t.

          • Martha (Smith) says:

            Andrew said,
            “It was the “schoolmarms” thing that really set me off. I hate that macho posturing. What next, a claim that he’s really good at opening jars?”

There are two ways someone can be bad at opening jars. One is not having the strength, and the other is being so strong that you unintentionally break the jar rather than open it. (My grandfather, who worked at the Lincoln factory polishing metal by hand, was purportedly in the second category.)

  2. Kyle C says:

    The thought experiment known as Maxwell’s Demon was intended to illustrate the fallacy of the law of small numbers (differences in gas molecule velocities). The fallacy and the corrective have been around for quite a while.

  3. Rahul says:

    “I have no idea; this is just a theory.”

This reminds me of a lot of WhatsApp forwards I get these days, where the sender prefaces the forward with the caveat “Forwarded as received,” supposedly assuaging his guilt lest he be spreading fake news or rumor mongering or some such.

    • Andrew says:


      I think there’s a difference between fake news / rumor mongering and what I’m doing, which is speculating about motives.

Here’s an example. Suppose someone were to write, “Gelman’s full of crap. He published a false theorem, and one of his award-winning papers needed a major correction because he reverse-coded his data.” That would be just fine. The “full of crap” thing is pure opinion, and the rest is true. Now suppose this hypothetical critic were to continue, “I speculate that Gelman publishes this sort of crap because he has an unsightly obsession and wants to amass publications, and he correctly calculates that people will pretty much forget the errors. I’m not saying Gelman publishes false things on purpose, it’s just that he doesn’t care so much about accuracy, and so he’s willing to publish things that might or might not be crap, just to increase his publication count.” OK, that’s not true at all! But if the hypothetical critic labeled it clearly as speculation, then it is what it is. But if someone were to write, “Gelman’s books are riddled with errors,” or “I’ve heard that Gelman’s books are riddled with errors,” but then couldn’t back that up with examples—that would be fake news or rumor mongering. I think I’ve been pretty good about avoiding that sort of thing.

      • Rahul says:


        I’m not against the post itself.

        But I just think that the disclaimer “I have no idea; this is just a theory” is superfluous.

        • Martha (Smith) says:

I wouldn’t say that “I have no idea; this is just a theory” is superfluous — I’d say it’s self-contradictory, because a theory is an idea.

        • Steve says:

I thought Andrew’s remark “I have no idea; this is just a theory” was self-deprecating. Hauser should have stuck to theorizing and not tried to engage with quantitative analysis that he didn’t understand. Andrew is saying that everything he says here is just a theory, and that since his strength is quantitative science, maybe he is over his skis. I thought it was very clever, but then again, I am very clever, so I could just be engaged in attribution bias.

  4. Michael Nelson says:

    “80% power was not just a slogan.”

Most federally-funded researchers write proposals where they demonstrate 80% power prospectively, then retrospectively write reports and papers demonstrating that failure to reject is likely a result of being under-powered, or of other design or implementation factors. We are rewarded for using terms like “nearly significant effect” or “non-significant but meaningful effect size.” We interpret disappointing results in light of exploratory and post hoc analyses, qualitative observations and subject self-reports, evidence of program infidelity, and violations of all types of validity. Yet both the proposal reviewers and the authors KNEW BEFOREHAND that these factors would come into play and were not accounted for in the power analysis. Had Hauser been intellectually dishonest instead of statistically dishonest, his same claims could’ve been published with almost no pushback.

    Why is one form of dishonesty allowed but not the other? Because we all know that power is “just a slogan,” a hurdle to jump over that demonstrates to reviewers that you meet a certain standard of expertise. The irony is, the best way to understand any data actually is in the context of all those factors we only acknowledge after the fact, and only if P > .05.

  5. bbis says:

    Behaviour like Hauser’s would seem to reflect a lack of understanding of and respect for the quantitative aspects of research, but it also seems to reflect a failure of imagination. Holding firmly to the initial hypothesis when the data collected suggest modification of the idea is necessary indicates a lack of flexibility and lack of ability to extend the ideas or develop new ones to fit the situation. Some of the most interesting moments in research for me have been when the data collected have been completely at odds with initial expectations and after some thought coming to the realization that the potential answer might be more interesting than the initial idea.

    If data were to speak at such a time would they say in a schoolmarmish voice – “What we have here is a failure to communicate.”

    • Andrew says:


      Yes, failure of imagination. I felt the same way about the psychologists who study ESP, power pose, embodied cognition, etc. Humans are such wonderfully complicated animals, yet when these researchers study human behavior all they can think of is crude mechanistic push-button models of the world? Of course it hasn’t helped that statisticians have spent decades hyping randomized experiments as the magic key to learning about the mythical “treatment effect.”

  6. Lee says:

    Except that there is an email record of him bullying his students into accepting falsified data. And then he tried to throw one of his students under the bus after being caught. That is hardly a matter of “statistics”.

    • Andrew says:


      Oh, yeah, Hauser did bad things, no doubt about that. He’s not the victim only of statistics; he’s also the victim of his own weaknesses and of the Harvard/NPR/Ted/Edge system that supported his toxic “great man” behavior. In addition to all that, I conjecture that his misunderstanding of statistics made things worse.
