“Eureka bias”: When you think you made a discovery and then you don’t want to give it up, even if it turns out you interpreted your data wrong

This came in the email one day:

I am writing to you with my own (very) small story of error-checking a published finding. If you end up posting any of this, please remove my name!

A few years ago, a well-read business journal published an article by a senior-level employee at my company. One of the findings was incredibly counter-intuitive. I looked up one of the reference studies and found that a key measure was reverse-coded (i.e., a 5 meant “poor” and a 1 meant “excellent”). My immediate conclusion was that this reverse coding was not accounted for in the article. I called the author and suggested that with such a strange finding, they should check the data to make sure it was coded properly. The author went back to the research partner, and they claimed the data was correct.

Still thinking the finding was anomalous, I downloaded the data from the original reference study. I then plotted the key metric and showed that the incorrectly coded data matched what was published, but that the correctly coded data matched the intuition. I sent those charts to the senior person. The author and research partner double-checked and confirmed there was an error in their reporting. So far so good!
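(A minimal sketch of the kind of check described here — flip the scale and re-plot — using simulated data and made-up variable names rather than anything from the study in question:)

```python
# Simulated illustration of a reverse-coding check; the variables (x, rating)
# and the 1-5 scale are hypothetical, not taken from the published study.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Fake survey: per the codebook, 1 = "excellent" and 5 = "poor", and
# ratings genuinely improve (move toward 1) as the predictor x rises.
x = rng.normal(size=500)
rating = np.clip(np.round(3 - 0.8 * x + rng.normal(size=500)), 1, 5)

# On a 1-5 scale, undoing the reverse coding is just: recoded = 6 - original.
rating_recoded = 6 - rating

fig, axes = plt.subplots(1, 2, sharey=True, figsize=(8, 3))
for ax, r, title in [(axes[0], rating, "as coded (1 = excellent)"),
                     (axes[1], rating_recoded, "recoded (5 = excellent)")]:
    ax.scatter(x, r, alpha=0.2)
    ax.set_title(title)
    ax.set_xlabel("predictor x")
plt.tight_layout()
plt.show()
```

The two panels show opposite slopes; only one matches the intuitive story, which is exactly the comparison the charts made.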

After confirming the error, the author called and asked me “What are you trying to accomplish here?”. I responded that I was only trying to protect this senior person (and the company), because if I found the error somebody else would find it later down the line. The author, however, was suspicious of why I took the time to investigate the data. I was puzzled, since it appeared it was the research partner who made the fundamental error and the author’s only fault was in not diving into a counter-intuitive result. In the end, the graph in question was redacted from the online version of the article. And, as you by now would certainly expect, the author claimed “none of the conclusions were materially impacted by the change”.

Do you have a name for this phenomenon in your lexicon yet? Might I suggest “eureka bias”? Meaning, when somebody is well-intentioned and discovers something unique, that “eureka moment” assumes a supremely privileged status in the researcher’s mind, and they never want to abandon that position despite evidence to the contrary…

My reply: Hey, I reverse-coded a variable once! Unfortunately it destroyed my empirical finding and I felt the need to issue a correction:

In the paper “Should the Democrats move to the left on economic policy?” [Ann. Appl. Stat. 2 (2008) 536–549] by Andrew Gelman and Cexun Jeffrey Cai, because of a data coding error on one of the variables, all our analysis of social issues is incorrect. Thus, arguably, all of Section 3 is wrong until proven otherwise.

We thank Yang Yang Hu for discovering this error and demonstrating its importance.

Regarding your general question:

“Eureka bias,” yes, that’s an interesting idea. I’ve written about this, I can’t remember where, and I think you’re right. Sometimes there’s confirmation bias, when someone does a study and, no surprise!, finds exactly what they were looking for, apparently all wrapped in a bow with statistical significance as long as you ignore the forking paths (as in that famous ESP paper from a few years back).

Other times, though, a researcher is surprised by the data and then takes that surprise as confirming evidence, with the implicit reasoning being: I wasn’t even looking to see this and it showed up anyway, so it must be real. At that point the researcher seems to become attached to the finding and doesn’t want to give it up, sometimes going to extreme lengths to defend it and even to attack and question the motives of anyone who points out problems with their data, as we see in your example above and we’ve seen many times before in various contexts.

So, yes, “Eureka bias” it is.

P.S. Check out the “please remove my name!” above. I hear this sort of thing from whistleblowers all the time, and it’s my impression that there’s a lot of bullying done against people who go to the trouble of uncovering inconvenient truths about purportedly successful research. Remember how Marc Hauser treated his research assistants? Remember how that psychologist applied the “terrorists” label to people who were pointing out errors in published research? There’s a good reason that PubPeer allows anonymous comments. Not all scientists respond well to criticism; some will attack anyone who they see as devaluing their brand.

11 thoughts on ““Eureka bias”: When you think you made a discovery and then you don’t want to give it up, even if it turns out you interpreted your data wrong”

  1. I once came up with a gene mapping algorithm. I tried it on simulated data (where the true location of the gene was known) for one case, as a quick test. It worked *beautifully.* I gave a talk where I showed that one result, carefully mentioning that, well, it was just one result.

    It never did that again! It was actually a poorly performing algorithm that had gotten very lucky on the first case.

    It took me a long time to accept this, but I eventually did. I mean, the thing didn’t work, there was no getting around it. But I could run unlimited simulations to make sure of this. Someone with limited supplies of data might be more likely to get stuck forever….
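    A toy version of this trap (my own sketch, not the commenter’s actual gene-mapping algorithm): a “mapper” that guesses the gene’s location at random can look great on a single simulated dataset, and only repeated simulation reveals the real success rate.

    ```python
    # Toy illustration (hypothetical, not the commenter's algorithm): a
    # "mapper" that guesses the gene's location uniformly at random. One
    # simulated dataset can make it look brilliant by luck; 10,000
    # replications expose the actual hit rate.
    import random

    def one_trial(rng, chrom_length=1000.0, tolerance=25.0):
        true_pos = rng.uniform(0, chrom_length)
        estimate = rng.uniform(0, chrom_length)  # the "algorithm"
        return abs(estimate - true_pos) <= tolerance

    rng = random.Random(1)
    print("first quick test:", "hit!" if one_trial(rng) else "miss")

    hits = sum(one_trial(rng) for _ in range(10_000))
    print(f"hit rate over 10,000 replications: {hits / 10_000:.1%}")  # ~5%
    ```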

    • I once excitedly reported to my advisor a tentative result from an MCMC fit of a “variable” change-point model, which I had looked at right before the meeting, surprised that 100% of the posterior draws put the change at a single level. After the meeting I looked through the code and saw that I had hard-coded the change point (from when I was trying to work out the syntax) and hadn’t removed that hard-coding. No wonder 100% of the draws reported that value!
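      For concreteness, here is roughly how that kind of leftover hard-coding looks in a minimal change-point sampler (my own reconstruction, not the commenter’s model):

      ```python
      # Minimal sketch of the bug described above: a Metropolis sampler for a
      # single change point, with a leftover debugging line that pins it.
      import numpy as np

      rng = np.random.default_rng(42)

      # Simulated series whose mean shifts at the true change point, 60.
      y = np.concatenate([rng.normal(0, 1, 60), rng.normal(2, 1, 40)])
      n = len(y)

      def log_lik(tau):
          # Gaussian log-likelihood, each segment's mean at its sample mean.
          left, right = y[:tau], y[tau:]
          return -0.5 * (((left - left.mean()) ** 2).sum() +
                         ((right - right.mean()) ** 2).sum())

      tau, draws = n // 2, []
      for _ in range(5000):
          prop = int(np.clip(tau + rng.integers(-5, 6), 2, n - 2))
          if np.log(rng.uniform()) < log_lik(prop) - log_lik(tau):
              tau = prop
          tau = 50  # <-- leftover hard-coding from syntax debugging: it
                    #     overwrites every accepted draw, so 100% of the
                    #     "posterior draws" report the same change point
          draws.append(tau)

      print(set(draws))  # {50}: the telltale symptom described above
      ```

      Delete the hard-coded line and the sampler behaves as intended, with draws spread over the plausible change points.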

  2. Andrew wrote: “Check out the “please remove my name!” above. I hear this sort of thing from whistleblowers all the time, and it’s my impression that there’s a lot of bullying done against people who go to the trouble of uncovering inconvenient truths about purportedly successful research.” and “There’s a good reason that Pubpeer allows anonymous comments. Not all scientists respond well to criticism; some will attack anyone who they see as devaluing their brand.”

    This is correct.

    Since April 2015 I have been working together with others to get a fraudulent study on the breeding biology of the Basra Reed Warbler (Al-Sheikhly et al. 2013, 2015) retracted from a Taylor & Francis journal; see https://osf.io/5pnk7/ and https://www.researchgate.net/project/Retracting-fraudulent-articles-on-the-breeding-biology-of-the-Basra-Reed-Warbler-Acrocephalus-griseldis for details. High-profile researchers have in the meantime become involved in this case.

    I have received heavy personal attacks, including severe ad hominem attacks and public humiliation. My response is always the same:

    (1): provide all authors of Porter et al. (2015a&b, https://www.tandfonline.com/doi/full/10.1080/09397140.2015.1023424 and
    https://www.tandfonline.com/doi/full/10.1080/09397140.2015.1023426 ), and others, full and unlimited access to the full set of raw research data of the fraudulent study;

    (2): provide us with reviews/comments from experts (within this field of research) who rebut or refute any of the findings of the report “Final investigation on serious allegations of fabricated and/or falsified data in Al-Sheikhly et al. (2013, 2015)” (see https://osf.io/vbdw8/ ), together with the names and contact details of these experts, a statement that these reviews/comments can be published on the public part of the internet together with those names and contact details, and a statement that these experts are willing to have a dialogue with all authors of Porter et al., and others, about their review/comment.

    The response to these queries is always the same: no response to the scientific queries, and no dialogue about them.

  3. Check out the “please remove my name!” above. I hear this sort of thing from whistleblowers all the time, and it’s my impression that there’s a lot of bullying done against people who go to the trouble of uncovering inconvenient truths about purportedly successful research.

    I do not know about whistleblowers in the research area, but whistleblowers in government are typically ignored, attacked, have their careers sidetracked or ruined, get kidnapped, and so on.

    I don’t see any compelling reason to think it would be different in the research area, either inside or outside of academia.

    • Jrkrideau:

      Yes, as James Heathers puts it, criticism is “simply bad for business.” One thing that’s struck me in the whole psychology-academics business is how rarely, if ever, the defenders of the status quo criticize any published work in their field, no matter how bad it is. You don’t even hear them criticizing Daryl Bem’s ESP work! Himmicanes, air rage, ovulation and voting, even that debunked work on the “critical positivity ratio,” you name it, no matter how shaky the claims, it’s all good. I can’t imagine all these people really believe all these things; they just want to support their team. And anyone who disagrees with them is the enemy.

      • Really cool link, in particular this seems to explain some of my experiences – “It’s the kind of environment where Researcher A publishes something that’s Truly Awful And Wrong, Researcher B points out the problems in a commentary, and then Researcher A writes a formal response which says how interesting but anyway Researcher B probably wears dirty socks oh and look over there! A bird! and then the whole thing is expected to go away.”

        On the other hand, even things like guides for doing and reporting studies better can be gamed to do the exact opposite (to deftly avoid meaningfully responding to criticism).

    • There are some reasons to think it would be worse in academia:

      1. There is a lot of secrecy and confidentiality (blinded reviews, committees, etc.) that enables bad behavior. For instance, a colleague of mine was repeatedly encouraged by a senior colleague to always use them as a reference. The letters were so bad, in fact outrageously bad, that someone leaked one of the confidential “recommendation letters” from a job application back to my colleague.

      2. There is no access-to-information law covering academia that I am aware of. Court orders for academic research materials are hard to get and to carry out. In Canada, what gets seen through legally obtained materials often cannot be divulged to the public.

      3. Investigators often have more bargaining power than the university administration. For instance, if they are repeatedly obtaining multi-million-dollar yearly funding, the university gets a cut; if that ends, the university may be stuck covering severance pay for their employees and other expenses for a while afterwards.

      4. There is likely little to no penalty if they are found out. At most, they might have to put up with labeling like: “Brilliant people often have personality clashes, and unfortunately this was likely just a case of that. On the other hand, 35 prominent colleagues have stated that they are indebted to this researcher’s kindness and support in their work.”

      It’s nothing new: “Newcomb studied mathematics under Benjamin Peirce and the impecunious Newcomb was often a welcome guest at the Peirce home.[4] However, he later was said to develop a dislike of Peirce’s son, Charles Sanders Peirce and has been accused of a “successful destruction” of C. S. Peirce’s career.[5] In particular, Daniel Coit Gilman, president of Johns Hopkins University, is alleged to have been on the point of awarding tenure to C. S. Peirce, before Newcomb intervened behind the scenes to dissuade him.[6] About 20 years later, Newcomb allegedly influenced the Carnegie Institution Trustees, to prevent C. S. Peirce’s last chance to publish his life’s work, through a denial of a Carnegie grant to Peirce, even though Andrew Carnegie himself, Theodore Roosevelt, William James and others, wrote to support it.[7]” https://en.wikipedia.org/wiki/Simon_Newcomb#Peirce_family

  4. “I’ve written about this, I can’t remember where …”

    http://statmodeling.stat.columbia.edu/2016/07/14/about-that-claim-that-police-are-less-likely-to-shoot-blacks-than-whites/
    “… I do want to comment on “the most surprising result of my career” thing. We should all have the capacity for being surprised. Science would go nowhere if we did nothing but confirm our pre-existing beliefs. Buuuuut . . . I feel like I see this reasoning a lot in media presentations of social science: “I came into this study expecting X, and then I found not-X, and the fact that I was surprised is an additional reason to trust my result.” The argument isn’t quite stated that way, but I think it’s implicit, that the surprise factor represents some sort of additional evidence. In general I’m with Miller that when a finding is surprising, we should look at it carefully as this could be an indication that something is missing in the analysis.”

  5. Radford Neal critiqued a climate study here: https://radfordneal.wordpress.com/2016/01/10/critique-of-debunking-the-climate-hiatus-by-rajaratnam-romano-tsiang-and-diffenbaugh/

    Favorite quote: “You might therefore be surprised that, as I will discuss below, this paper is completely wrong. Nothing in it is correct. It fails in every imaginable respect.”

    Some of the comments to his critique illustrate the problems discussed here.
