
Should we talk less about bad social science research and more about bad medical research?

Paul Alper pointed me to this news story, “Harvard Calls for Retraction of Dozens of Studies by Noted Cardiac Researcher: Some 31 studies by Dr. Piero Anversa contain fabricated or falsified data, officials concluded. Dr. Anversa popularized the idea of stem cell treatment for damaged hearts.”

I replied: Ahhh, Harvard . . . the reporter should’ve asked Marc Hauser for a quote.

Alper responded:

Marc Hauser’s research involved “cotton-top tamarin monkeys” while Piero Anversa was falsifying and spawning research on damaged hearts:

The cardiologist rocketed to fame in 2001 with a flashy paper claiming that, contrary to scientific consensus, heart muscle could be regenerated. If true, the research would have had enormous significance for patients worldwide.

I suspect that I, and virtually all of the other contributors to your blog, know nothing** about cotton-top tamarin monkeys but are fascinated and interested in stem cells and heart regeneration. Consequently, are Hauser and Anversa separated by a chasm or should they be lumped together in the Hall of Shame? Put another way, do we have yet another instance of crime and appropriate punishment?

**Your blog audience is so broad that there well may be cotton-top tamarin monkey mavens out there dying to hit the enter key.

Good point. It’s not up to me at all: I don’t administer punishment of any sort; as a blogger I function as a very small news organization, and my only role is to sometimes look into these cases, bring them to others’ notice, and host discussions. If it were up to me, David Weakliem and Jay Livingston would be regular New York Times columnists, and Mark Palko and Joseph Delaney would be the must-read bloggers that everyone would check each morning. Also, if it were up to me, everyone would have to post all their data and code—at least, that would be the default policy; researchers would have to give very good reasons to get out of this requirement. (Not that I always or even usually post my data and code; but I should do better too.) But none of these things are up to me.

From Harvard’s point of view, perhaps the question is whether they should go easy on people like Hauser, a person who is basically an entertainer, and whose main crime was to fake some of his entertainment—a sort of Doris Kearns Goodwin, if you will—and be tougher on people such as Anversa, whose misdeeds can cost lives. (I don’t know where you should put someone like John Yoo who advocated for actual torture, but I suppose that someone who agreed with Yoo politically would make a similar argument against, say, old-style apologists for the Soviet Union.)

One argument for not taking people like Hauser, Wansink, etc., seriously, even in their misdeeds, is that after the flaws in their methods were revealed—after it turned out that their blithe confidence (in Wansink’s case) or attacks on whistleblowers (in Hauser’s case) were not borne out by the data—these guys just continued to say their original claims were valid. So, for them, it was never about the data at all, it was always about their stunning ideas. Or, to put it another way, the data were there to modify the details of their existing hypotheses, or to allow them to gently develop and extend their models, in a way comparable to how Philip K. Dick used the I Ching to decide what would happen next in his books. (Actually, that analogy is pretty good, as one could just as well say that Dick used randomness not so much to “decide what would happen” but rather “to discover what would happen” next.)

Anyway, to get back to the noise-miners: The supposed empirical support was just there for them to satisfy the conventions of modern-day science. So when it turned out that the promised data had never been there . . . so what, really? The data never mattered in the first place, as these researchers implicitly admitted by not giving up on any of their substantive claims. So maybe these profs should just move into the Department of Imaginative Literature and the universities can call it a day. The medical researchers who misreport their data: That’s a bigger problem.

And what about the news media, myself included? Should I spend more time blogging about medical research and less time blogging about social science research? It’s a tough call. Social science is my own area of expertise, so I think I’m making more of a contribution by leveraging that expertise than by opining on medical research that I don’t really understand.

A related issue is accessibility: people send me more items on social science, and it takes me less effort to evaluate social science claims.

Also, I think social science is important. It does not seem that there’s any good evidence that elections are determined by shark attacks or the outcomes of college football games, or that subliminal smiley faces cause large swings in opinion, or that women’s political preferences vary greatly based on time of the month—but if any (or, lord help us, all) of these claims were true, then this would be consequential: it would “punch a big hole in democratic theory,” in the memorable words of Larry Bartels.

Monkey language and bottomless soup bowls: I don’t care about those so much. So why have I devoted so much blog space to those silly cases? Partly it’s from a fascination with people who refuse to admit error even when it’s staring them in the face, partly because it can give insights into general issues in statistics and science, and partly because I think people can miss the point in these cases by focusing on the drama and missing out on the statistics; see for example here and here. But mostly I write more about social science because social science is my “thing.” Just like I write more about football and baseball than about rugby and cricket.

P.S. One more thing: Don’t forget that in all these fields, social science, medical science, whatever, the problem is not just with bad research, cheaters, or even incompetents. No, there are big problems even with solid research done by honest researchers who are doing their best but are still using methods that misrepresent what we learn from the data. For example, the ORBITA study of heart stents, where p=0.20 (actually p=0.09 when the data were analyzed more appropriately) was widely reported as implying no effect. Honesty and transparency—and even skill and competence in the use of standard methods—are not enough. Sometimes, as in the above post, it makes sense to talk about flat-out bad research and the prominent people who do it, but that’s only one part of the story.
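The ORBITA point is worth making concrete: a nonsignificant p-value is not evidence of no effect. Here is a toy calculation, using made-up summary statistics (not the actual ORBITA numbers), showing how a between-group comparison can yield p ≈ 0.2 while the confidence interval still includes a clinically meaningful benefit:

```python
from math import sqrt
from scipy import stats

# Hypothetical summary statistics for illustration only:
# change in exercise time (seconds), treatment vs. placebo arms.
n1, m1, s1 = 100, 28.4, 90.0   # treatment arm: n, mean, sd
n2, m2, s2 = 100, 11.8, 90.0   # placebo arm: n, mean, sd

diff = m1 - m2                               # observed difference in means
se = sqrt(s1**2 / n1 + s2**2 / n2)           # standard error of the difference
df = n1 + n2 - 2
t = diff / se
p = 2 * stats.t.sf(abs(t), df)               # two-sided p-value
lo, hi = diff - 1.96 * se, diff + 1.96 * se  # approximate 95% CI

print(f"diff = {diff:.1f} s, p = {p:.2f}, 95% CI ({lo:.1f}, {hi:.1f})")
```

With these invented numbers, p is around 0.2, yet the interval runs from a small negative value up to a large positive one: the data are compatible both with no effect and with a sizable benefit. “Not significant” and “no effect” are very different statements.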


  1. Dale Lehman says:

    Yet another reason to add to the “retire statistical significance” list – if it carries so much weight for publications, grants, etc. then it enhances the incentive to cheat. Researchers with ample incentive and ability to cheat translates to “some will do so.” This in no way excuses their behavior, but we should expect such behavior under these conditions. Open data will help limit the ability; retiring statistical significance may limit the incentive. More is needed certainly, but these two things seem easier than altering the human behaviors (biological or social) that lead to cheating.

  2. Clyde Schechter says:

    Though it’s tangential, let’s talk about Yoo.

    First, to be clear, in my opinion, Yoo is a war criminal who should be sent to the Hague for judgment.

    That said, nothing that I’m aware of in Yoo’s work qualifies as *scientific* misconduct. Interpreting the law is not science; it is about “ought,” not about “is.” Having depraved opinions is an entirely different matter from misrepresenting facts.

    This is also reflected in the ethical precepts guiding different professions. The ethics of science mandate trying to be as truthful as we can be. We are urged to present the data as completely and fairly as we can, not cherry-pick our preferred results, and we are expected to modify our understanding of the world in accordance with all the information that is available to us, in search of truth. To the extent that we fail to do this, wittingly or otherwise, we fall short of expectations and impede the progress of science.

    Lawyers, however, do not apply themselves to seeking truth. They are bound to advocate for their clients’ interests. One might think that this is a despicable system (I largely do, but I also see a small kernel of merit in it), but that is what it is. Lawyers are perfectly free, except perhaps when prosecuting criminal cases, to present only the evidence that favors their side and sequester anything contrary. Lawyers are perfectly free to argue for any interpretation of law they think advances their case, limited only by whether the court will find it too ludicrous. Indeed, the only constraint I’m aware of that binds them is that they are not permitted to fabricate evidence or suborn perjury. It’s an entirely different professional culture and ethics.

    • Andrew says:


      Yes, Yoo is a lawyer, but he is also a legal academic. I think that a professor’s academic responsibilities apply even when he’s not working at the university. For example, I do statistical consulting. In the real world, lots of statistical consulting is done in a slanted way, pulling out whatever argument can be made to support one side. I don’t think I should do that. I mean, I don’t think I’d do that even if I was a consultant and nothing but, but I really don’t think it’s appropriate given my academic affiliation. Similarly, I don’t think it’s appropriate for a law professor “to present only the evidence that favors their side and sequester anything contrary,” even when he is acting outside his direct university responsibilities. I have no problem with Yoo or anyone else writing a polemic, but even there I think he should present his concerns as well as his arguments.

      • I think when a lawyer has a specific client, their dominant ethical limitations are to do what is best for that client. As an academic doing pure research and opinion writing about the law, you don’t have a specific client whose interests you are bound to advocate for. In some languages lawyer and advocate are basically the same word right?

        I’m not saying this really in reference to Yoo, as I didn’t really follow the discussion of his role, but I can see that there is potentially a conflict of interest between doing what is best for the client and doing what is logically correct or best for society or whatever. To take things to extremes, obviously if you’re a lawyer representing a murderer who has confessed to you, you are absolutely obligated to defend them effectively, even though you’re fully aware that the best thing for society would be for you to lose. It makes for good fiction right? Michael Connelly uses this pretty effectively in the Mickey Haller series.

    • Martha (Smith) says:

      Your point that different professions have different standards of ethics is well-taken. I agree with your description of the ethics of science, but think it needs to be supplemented by the observation/awareness that not all people who consider themselves scientists agree with this description. Thus, one area where we need to improve is to incorporate more explicit teaching of the ethics of science (in both science and statistics teaching), and perhaps more explicit use of the word “unethical” in critiquing science practices that do not come up to these standards.

    • gdanning says:

      I think you might be letting Yoo off the hook a little easily. The ABA Model Rules of Professional Responsibility state that “A lawyer shall not knowingly … fail to disclose to the tribunal legal authority in the controlling jurisdiction known to the lawyer to be directly adverse to the position of the client and not disclosed by opposing counsel.” They also state, “in an ex parte proceeding, a lawyer shall inform the tribunal of all material facts known to the lawyer that will enable the tribunal to make an informed decision, whether or not the facts are adverse.”

      Also, “A lawyer shall not bring or defend a proceeding, or assert or controvert an issue therein, unless there is a basis in law and fact for doing so that is not frivolous, which includes a good faith argument for an extension, modification or reversal of existing law.”

      Moreover, when counseling a client, “a lawyer shall exercise independent professional judgment and render candid advice. In rendering advice, a lawyer may refer not only to law but to other considerations such as moral, economic, social and political factors, that may be relevant to the client’s situation.”

      So, Yoo was not free to simply come up with any old argument to justify what Bush wanted to do.

      PS: Please note that I don’t mean to opine re whether or not Yoo violated these ethical rules.

  3. Paul Alper says:

    From the NYT article:

    “Some scientists wondered how a questionable line of research persisted for so long. Maybe, Dr. Molkentin said, experts were just too timid to take a stand. But what about those companies selling stem cell treatments for the heart? ‘People wanted to believe,’ he said.”

    No mention made of the monetary incentive.

  4. Al says:

    “A related issue is accessibility: people send me more items on social science, and it takes me less effort to evaluate social science claims.”


    I think this issue extends to teaching as well. I work with medical doctors, and yet if I look back at most of my presentations explaining some of the issues of “bad research,” I frequently use examples from social science. The medical literature is full of the same problems, but identifying the problems requires some domain knowledge that I would need to briefly explain (and make sure I explain correctly, for fear of outing myself as a non-medical expert).

    A pizza example is preferable to an interleukin-10 example for the purposes of explaining bad research practices.

  5. Imaging guy says:

    “Also, I think social science is important. It does not seem that there’s any good evidence that elections are determined by shark attacks or the outcomes of college football games, or that subliminal smiley faces cause large swings in opinion, or that women’s political preferences vary greatly based on time of the month………..”

    Can United States elections be determined by foreign influence (e.g. Russia) through Facebook and other social media, as many Democrats believe? Can the United States influence popular uprisings and demonstrations against autocratic and dictatorial governments (the CIA is often accused of being the bogeyman)?

    • Andrew says:


      Yes, I think political actions can determine elections and that popular uprisings and demonstrations can be planned. Not always, and not reliably—lots of campaign strategies don’t work, and people plan lots of uprisings that never happen—but sometimes. Political campaigning, including surreptitious actions, is real. I think the claims about shark attacks and subliminal smiley faces are implausible, because (a) the claim is not just that these effects can happen, but that the effects are large and predictable, and (b) there’s a big difference between intentional strategies designed to affect an election or an uprising, as compared to irrelevant stimuli that are purported to have large indirect effects.

  6. Paul Alper says:

    For those interested in cotton-top tamarin monkeys:

  7. Dalton says:

    On a similar note, maybe we should also talk more about bad statistics driving policy and management decisions even or especially if it does not involve peer-reviewed research.

    In my previous job, I did a lot of work on environmental contamination and the regulatory process of addressing it. This involves things like mercury/lead/PCB (etc.) contamination in soil/water/sediment (etc.). I quit, in part because I was in a junior position and was often ignored when I pointed out the atrocious statistical methods being used, because I did not have an established reputation. The process of environmental investigation and remediation in the United States is based on a series of adversarial relationships (e.g. between the regulator and regulated, the litigator and litigated). There is inherent distrust, but no decision maker I ever encountered on either side of the table had the statistical training to appropriately judge a statistical argument. As a result, a weird sort of consensus has emerged around the reliability of a set of statistical practices recommended by the USEPA and actively promulgated through their in-house software ProUCL (UCL stands for upper confidence limit). While the EPA itself does not mandate the use of ProUCL, many state agencies do, and often won’t accept more sophisticated analyses because they don’t have the staff or resources to review them.

    So what is ProUCL? It might as well be a random number generator, but what it aims to do is estimate the upper confidence limit of the mean concentration of some chemical based on an often limited number of samples. A typical use case might involve 10 to 15 soil/fish/sediment/water samples from a contaminated site, which are analyzed in a lab for one or more (up to hundreds of) potential contaminants. There are often non-detects (left-censored values). The data are more often than not right-skewed. These data get cut and pasted into ProUCL, which runs through a (hilarious, if it weren’t so sad) process of dozens to thousands of null hypothesis tests: outlier tests (based on normal assumptions) and distribution tests (normal, gamma, or lognormal). It also calculates about a dozen versions of an upper confidence limit and recommends one based on the results of the previous hypothesis tests. Removal of “outliers” is weirdly encouraged. There is so, so much wrong with the software, but also so, so much resistance to any alternative analysis.
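    For contrast with the test-everything pipeline described above, here is a minimal sketch of one defensible way to get a UCL on a mean from a small, right-skewed sample: a percentile bootstrap. This is illustrative only (simulated data, plain NumPy, no handling of the non-detect/censoring problem), and it is not a reconstruction of ProUCL’s actual algorithm:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical site data: 12 right-skewed (lognormal) soil concentrations,
    # mimicking the small-sample case described above.
    conc = rng.lognormal(mean=1.0, sigma=1.0, size=12)

    def bootstrap_ucl(x, conf=0.95, n_boot=10_000, rng=None):
        """Percentile-bootstrap upper confidence limit on the mean of x."""
        rng = rng or np.random.default_rng()
        # Resample with replacement and take the mean of each resample.
        boot_means = rng.choice(x, size=(n_boot, len(x)), replace=True).mean(axis=1)
        # The UCL is the upper quantile of the bootstrap distribution of the mean.
        return np.quantile(boot_means, conf)

    ucl = bootstrap_ucl(conc, rng=rng)
    print(f"sample mean = {conc.mean():.2f}, 95% bootstrap UCL = {ucl:.2f}")
    ```

    Even a simple resampling approach like this makes its assumptions visible, which is precisely what a battery of automated hypothesis tests followed by a software “recommendation” does not.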

    I left this job with exactly zero confidence in the process of environmental remediation. So, NYC folks, please limit your consumption of bass from the Hudson.
