
Ethics and statistics

I spoke (remotely) recently at the University of Wisconsin, on the topic of ethics and statistics. Afterward, I received the following question from Fabrizzio Sanchez:

As hard as it is to do, I thought it was good to try and define what exactly makes for an ethical violation. Your third point noted that it needed to break some sort of rule. Could you elaborate on this idea in the context of statistical rules? From my understanding, most statistical rules are not 0 or 1, but somewhere in between. (Removing an outlier comes to mind as an example).

He was responding to my statement that “An ethics problem arises when you are considering an action that (a) benefits you or some cause you support, (b) hurts or reduces benefits to others, and (c) violates some rule.”

I thought the bit about violating a rule was necessary because it’s generally considered acceptable to try to get more for yourself, if you’re doing so within the context of an accepted set of rules. Here I wasn’t thinking so much of statistical rules (for example, the idea that for statistical significance you need p=0.05 not p=0.06) but rather social rules. But maybe there’s more to be said on this.

The big new idea in my talk (which, unfortunately, I didn’t get to during the 20 minutes that were allocated to me) is near the end of the presentation, when I suggest that mainstream statistical methods (Bayes included) can themselves be unethical. Maybe this will be the subject of a future Chance column.

P.S. One difficulty in posting slides is that they can be misleading without the accompanying speech. In particular, near the end of the slides I show the notorious third-degree polynomial regression discontinuity fit, under the headline, “Find the ethical problem!” Just to be clear, let me explain that I think the ethical problem here is not with the people who did the analysis and made the graph; rather, I think the ethical problem arises in our scientific publication system itself, which rewards dramatic claims based on statistical significance and disincentivizes more realistic, sober assessment of evidence. Also contributing to the ethical problem has been the publication of papers recommending something as goofy as this sort of high-degree polynomial fit.
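To see why such fits are risky, here is a minimal simulation (not from the talk; the data, seed, and function names are invented for illustration): fitting separate high-degree polynomials on each side of a cutoff can manufacture an apparent jump even when the underlying relationship is perfectly smooth.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated running variable with NO true discontinuity at the cutoff:
# a smooth linear trend plus noise.
x = rng.uniform(-1, 1, 200)
y = 0.5 * x + rng.normal(0, 0.5, 200)
cutoff = 0.0
left, right = x < cutoff, x >= cutoff

def jump_at_cutoff(degree):
    """Fit separate degree-`degree` polynomials on each side of the
    cutoff and return the implied discontinuity there."""
    fit_left = np.polyfit(x[left], y[left], degree)
    fit_right = np.polyfit(x[right], y[right], degree)
    return np.polyval(fit_right, cutoff) - np.polyval(fit_left, cutoff)

for d in (1, 3):
    print(f"degree {d}: estimated jump at cutoff = {jump_at_cutoff(d):+.3f}")
```

The true jump is zero by construction; the cubic fit’s boundary behavior makes its estimated jump far noisier than the linear fit’s across simulations, which is the usual critique of high-degree polynomial RD specifications.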


  1. BenK says:

    Incompetence is not, in the absence of self-awareness, unethical.

  2. Rahul says:

    Andrew’s first slide has this interesting thought experiment:

    “A company gives you $10,000 to assist in research with a new drug, with a promise of $100,000 more if it is successful.

    But the data are inconclusive: 20/100 deaths with the treatment, 21/102 deaths with the control. Should you: (a) look deeper for evidence that the new drug is better? (b) do an analysis you suspect is wrong? (c) do an analysis you know is wrong? (d) fake the data?”

    I think ethical dilemmas are best nipped in the bud: Refuse the $100,000 offer even before you’ve seen the first smidgen of data.
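    Whether these data really are inconclusive is easy to check with a two-proportion z-test (a quick sketch; the counts come from the quoted thought experiment, but the choice of test is mine):

```python
from math import sqrt, erf

# Trial data from the thought experiment
deaths_t, n_t = 20, 100   # treatment
deaths_c, n_c = 21, 102   # control

p_t = deaths_t / n_t
p_c = deaths_c / n_c

# Pooled two-proportion z-test
p_pool = (deaths_t + deaths_c) / (n_t + n_c)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_t + 1 / n_c))
z = (p_t - p_c) / se

# Two-sided p-value from the normal CDF (via the error function)
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

print(f"z = {z:.3f}, p = {p_value:.3f}")  # z ≈ -0.104, p ≈ 0.92
```

    With z ≈ -0.1 and p ≈ 0.92, the difference is nowhere near significance, which is exactly what makes options (a)-(d) tempting.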

    • K? O'Rourke says:


      I would agree, accepting such a contract would be unethical regardless of what one then does.

      But that is given the context implied by “research with a new drug,” where the analysis will be proffered as scientific evidence of benefit. So (a) and (b) are obviously wrong, and (c) is the subtle one: it breaks the rule that scientific evidence requires a complete lack of incentives for or against particular findings (Feynman), e.g., failing to _blind_ studies that could have been blinded (preliminary studies being OK, though).

      Simply change that to: as my legal counsel, I will pay you a $10,000 retainer and $100,000 more if we win the case. Then it’s standard practice. Court cases are adversarial and you hire an advocate to get what you want; they are supposed to be highly biased toward your case!

      I do think that when statisticians use techniques they know are inadequate, simply because it’s to their advantage (saves time, gets more publications, avoids arguments with reviewers, etc.), they are being unethical. In one of my experiences the explanation simply was: I used multiple regression with stepwise selection to analyse that observational study you referred to me, as I had to do something that afternoon that I could bill for. They obviously did not think they had been unethical. I did, and immediately stopped referring people to them.

    • Andrew says:


      Not taking the contract will indeed nip the ethical dilemma in the bud, but at a cost to you of $10,000 or $110,000. That’s a lot! And I don’t think that accepting this contract is necessarily unethical. Let me frame it slightly differently: A company gives you 100 shares of stock to assist in research with a new drug. If the drug is unsuccessful, the stock will be worth $10,000 in expectation; if the drug is successful, the stock will be worth $100,000 in expectation.

      Just to be clear, in this situation I think I’d report the results as I saw them but I’d also talk with the researchers about what they were expecting to see, etc.

      And of course it’s very difficult to avoid ethical problems of others. For example, I could write my honest report, decline the possible $100,000, and then find out the company gave the data to a new statistician who gave them the report they wanted.

      • Rahul says:


        I disagree. I think accepting such a contract is absolutely unethical. The other options are worse still: those border on possibly criminal conduct.

        Doing the right / ethical thing does not have to be the monetarily wise thing to do. I think the pecuniary implications of a decision ought to be orthogonal to ethical considerations.

        The fact that you stand to lose a lot of money makes it an even more critical decision ethically.

        The raison d’être for many codes of ethics isn’t primarily to prevent actual wrongdoing (we have criminal law / torts / contracts for that); it is to obviate any opportunity / temptation or even the appearance of wrongdoing.

        • Almost every choice we make presents an ethical dilemma of whether to cheat or play fair (terms which are themselves very tricky to define).

          For example, accepting a tenure-track faculty job is unethical because it’s a gamble that pays off if and only if your research is successful. That is, you do research, which if it gets published leads to tenure (or more funding), and if not, leads to you getting fired. Given the positivity bias in publishing, this leads to exactly the same sort of bias to cook the books as being paid by a company for “results”.

          Same problem accepting any kind of consulting contract.

          Now let’s say you’re in the UK and there’s no tenure, but your department gets rated and allocated resources every N years based on things like Ph.D. student graduation rate in K years. Of course there’s incentive to push people out the door with a Ph.D. sooner than you would without that evaluation metric in place.

        • bxg says:

          That may be the rationale for many formalized codes of ethics, but it does not describe ethics itself. One may behave fully ethically even amid the overwhelming appearance of wrongdoing (assuming, of course, that there is in fact none), and likewise under vast temptation (assuming you resisted). An industry code of ethics might want you to avoid these situations for practical reasons, but you are using “ethics” in a fashion entirely unfamiliar to me if it is _necessarily_ unethical to find oneself there.
          Some people even think that resisting temptation, and behaving rightly even when the world will anyway see you as being wrong, actually gives bonus ethics points.

      • K? O'Rourke says:


        OK if it’s fully disclosed.

        I was interpreting the contract as being confidential and the analysis to be presented as an independent unbiased scientific opinion.

        > I think I’d report the results as I saw them
        Most academics think that (the ones we surveyed in the above link, anyway), but most would also insist on blinding in their experiments so as not to have to worry about that intentional and especially unintentional bias.

        So the unethical part to me is the hiding of the need to worry (and there are now rules to report such potential conflicts of interest).

        And for some people we wouldn’t worry much at all ;-)

      • Rahul says:

        Regarding the stocks question: I think accepting such stock is unethical too if it is reasonable to expect your analysis will influence the stock value significantly.

  3. Nick Menzies says:

    I think that to draw concrete conclusions one would have to define the link between the statistical approach and the decision rule (i.e., can there be ethical violations in the absence of action?).

    I think the worst situation is where the inclusion/exclusion of ‘no effect’ from a confidence interval (or posterior interval, for that matter) *becomes* the decision rule.
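    That worry can be made concrete with a toy example (invented numbers): two nearly identical estimates whose 95% intervals fall on opposite sides of zero get opposite treatment under an “act iff the interval excludes no effect” rule.

```python
# Sketch with invented numbers: two near-identical estimates whose
# 95% intervals barely include / barely exclude zero.
est_a, se_a = 0.100, 0.052   # interval barely includes 0
est_b, se_b = 0.105, 0.051   # interval barely excludes 0

def decide(est, se, z=1.96):
    """Decision rule: act iff the 95% interval excludes no-effect."""
    lo, hi = est - z * se, est + z * se
    return "act" if (lo > 0 or hi < 0) else "do not act"

print(decide(est_a, se_a))  # interval (-0.002, 0.202) -> "do not act"
print(decide(est_b, se_b))  # interval (0.005, 0.205)  -> "act"
```

    The underlying evidence is essentially identical, yet the rule flips the decision, which is the sense in which letting interval exclusion *become* the decision rule is problematic.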

  4. Fernando says:

    According to Columbia University’s Responsible Conduct of Research website:

    A conflict of interest is a situation in which financial or other personal considerations have the potential to compromise or bias professional judgment and objectivity.

    Conflicts of interest are broadly divided into two categories: intangible, i.e., those involving academic activities and scholarship; and tangible, i.e., those involving financial relationships.

    In many health-related journals authors are required to declare any conflicts of interest, but typically these declarations focus on the tangible (i.e., financial) aspects. The focus is typically on payments by corporations, etc…

    But why not declare the intangible aspects? Why not always include in this section:

    My salary, grants, promotions, professional standing, and career all depend on publishing significant findings frequently. I declare these incentives have in no way influenced the integrity of the present research.

    Or something along those lines.

    It’s too easy to always blame “greedy corporations” when in fact recent (cancer) replication studies show that corporations themselves no longer trust academic research. Instead they are turning to private labs.

    To wit, I once knew the owner of such a lab. He said his incentive was to kill the drug early. That way the client could save on further development costs and lawsuits down the line.

    • K? O'Rourke says:


      I agree about the intangible aspects.

      However, _forcing_ lying or self-delusion in “I declare these incentives have in no way influenced the integrity of the present research” may be unethical ;-)

      At most people can attest that they tried not to be influenced.

      • Fernando says:

        This was only a first cut. One might say “to the best of my knowledge and abilities, these incentives have not ….”

        In addition they could provide corroborating evidence like: “We registered a protocol before randomization, or data collection, etc… and noted any departures from protocol”; “replication data and files are available at … “; “coders were blind to treatment status”; and so on. Effectively, use a checklist.

    • Rahul says:

      “Why not always include in this section” Well, if everyone has to report this every time, the caveat loses its import.

      • Fernando says:


        I completely, emphatically, and wholeheartedly disagree. ;-)

        First, it is good exercise to remind ourselves and others of the incentives at play, and confront them on a daily basis;

        Second, not all researchers will have the same incentives. Some academic institutions may change their incentive schemes, while researchers at other institutions will have different incentives and biases. This way it becomes easier to assess the evidence.

        • Fernando says:

          PS: Why put “smoking kills” in every pack of cigarettes?

          • Rahul says:

            Why not put “this kills” on sugary drinks, fatty foods & alcohol bottles?

            • Fernando says:

              Personally I think sugar is the new tobacco so just wait.

              Alcohol already contains warnings for pregnant mothers.

              NYC fast food restaurants display calories, etc…

              You know nudges, behavioral economics, and all that.

              • jrc says:

                I personally think sitting in front of your computer all day, after driving to and from work, and only walking a total of .5k in a day is the new tobacco. On the other hand, I like the old tobacco quite a bit.

                That said – I like Fernando’s disclaimer, and I agree that having to write it out is good practice. When I have to write a “conflict of interest” disclosure, I usually sit there and think about it for a minute.

        • Fernando says:

          PPS By “always include in this section” I meant more generally declarations of intangible conflicts of interest, not the exact same wording. However, for 95% of academics and perhaps 90% of articles this declaration will be the exact same. But it might change over time.

  5. “But maybe there’s more to be said on this.” There is an entire literature on ethics spanning thousands of years. So, yeah, maybe a bit more to be said.

    For a start, try to clarify what you mean by “rule” or “intent.” You get drawn into semantics, epistemology, and social convention straightaway, about which there has also been more than a bit of ink spilled over the past few thousand years.

    It’s sort of like saying “here’s the definition of a random variable, but maybe there’s more to say about probability theory.”

    • Fernando says:

      Yes. Ethics is a vast minefield. But there is a lot to be said for shooting from the hip first, asking questions later. You may reinvent the wheel but you may also tread somewhere new.

  6. Rahul says:


    What would you say if you found out that the external, independent lab which routinely tests your cafeterias for pathogen contamination was being contractually paid an incentive bonus linked to how few violations it wrote?

    • What if it were getting a contractual bonus for how few false positives it wrote? (I.e., assume there is some second, more expensive, higher-performing test that you follow up with, and that a false positive has undeserved negative consequences for various people… wouldn’t this kind of incentivization actually help the situation?)

      My point is, compensation that is tied to outcomes is not necessarily a bad thing; the issue is when that tie to outcomes incentivizes lying.

  7. Mikem says:

    Pre-tenure, I chose topics that were more likely to result in publications, regardless of their merit. Post-tenure, I was more discriminating.

    And as for Rahul’s (2:56 PM) point, who pays for Moody’s bond ratings? The companies they rate. Why pension funds don’t object (and pay for more objective assessments) is beyond me.

    • K? O'Rourke says:

      As far as I understand, Moody’s does not accept bonus payments for giving high ratings.

      Though I have heard concerns about statistical consultants having large positive bias (avoid killing the drug early) when dealing with small pharma companies (as there is little expectation of repeat business.)

    • Fernando says:


      Often it is pension funds themselves who pressure rating agencies to grade junk AAA. Here is the logic:

      During the real estate boom CDOs gave good returns. Pension fund _managers_ want good short-term returns. But many pension funds are limited to investing in AAA assets. So what happens? Managers put pressure on Moody’s to rate CDOs AAA.

      Now everybody is happy. Issuers get access to a large liquid market. Pension managers get better returns (at least in the short term). Moody’s does well rating stuff. Pension holders are happy with high returns. And in general everyone is happy until the music stops, when someone gets the blame.

  8. Daniel Gotthardt says:


    I’m wondering why you consider “benefits you or some cause you support” to be a necessary condition for considering a (statistical) action unethical. Maybe most ethical problems in statistical research arise from such cases, I don’t know. But I think it can also be highly unethical if you just “don’t care,” or if the action considered does not affect you at all, or even if you yourself might be harmed by your own actions. If you violate a statistical rule and also hurt others and do it just for the heck of it, that’s still an ethical issue.

    To take it even further, I also don’t know if your second premise is necessary, either. Let’s say a researcher is doing a study on something which does not really affect anybody. Perhaps some statistical research in physics might fit. Or the particular work is just so unimportant that nobody will really care about it. Even then I would consider a violation of statistical rules to be unethical if there’s not a good reason (*) for it. I think it is an ethical issue already if it sets a bad example to new generations of researchers, but this might of course be considered reducing benefits for others in the long run. But at least qualifying this part to be not only about persons directly affected by the research at hand might be important, too.

    I think this is not only nitpicking, because reading those three premises joined by an “and” might lead to the idea that if any of the three is not the case (or you think it’s not the case), everything will be fine.

    (*) What such a good reason might be and what constitutes a rule are of course additional ambiguities but I do agree with Fernando that a shot from the hip is a good thing to do and I also really like your work on Ethics and Statistics for raising issues and calling out concrete examples. I just wonder about your basic premises and if there may be any problems with them.

    • Andrew says:


      I agree that these other things can be unethical. But note above that I was not giving a definition of what is unethical; I was giving my definition of what is an ethics problem, which is somewhat different. If something is unethical and it doesn’t benefit you, you can just not do it, no problem. The problem arises when you have to give something up to follow the ethical rules.

  9. Erin Jonaitis says:

    I appreciated that you took the time to speak to us, especially given how uncooperative technology was on our end. I enjoyed your presentation.

    It’s a shame there was no way for you to hear the next person who spoke, Linda Hogle from UW’s medical ethics department. She spoke on entirely different issues that have been taking up some of my spare cycles these days, namely, the ethical framework surrounding human-subjects data in an era where the data of greatest interest to many research communities may be governed not by an IRB but by corporate Terms of Service and Data Use Agreements. The Facebook emotion study, I think, is a good case study of how university and corporate research cultures differ (for instance, see this letter to PNAS from law professor James Grimmelmann), and also of how users’ expectations of their ownership of their data may be at odds with reality.

    This is a place I think about ethics a lot because my conflict of interest is so deep: I am nosy, and I also get paid to play with data, and want to keep doing that; but I also care a lot about privacy and consent, and at the end of the day that has to matter more to me. The Facebook brouhaha also shone a light on the importance of your point (c) — many people I saw participate in the debate reached for established norms about IRB approval and so forth, but having heard Linda’s talk I have the sense that IRBs are actually fumbling as much as anyone. At some point someone has to make the rules; and if you’re lucky enough to live in a time when power dynamics are shifting quickly in important ways, that someone might well be you.

    I wish I saw statisticians talking about this issue. I mean, sure, some of us work on geological data or whatever, but anyone who works with human subjects data ought to care where it came from, and whether the donor consented, and under what duress.

  10. […] Commenter Fernando on Andrew Gelman’s Statistical Modeling, Causal Inference and Social Science […]

  11. dmk38 says:

    I don’t think rules are necessary or useful here (or anywhere else for that matter).

    Why not simply,

    It is unethical for a researcher to deliberately do anything in reporting data that would cause a reasonable and informed reader to assign the results more weight than the reader would have if the researcher had *told* the reader what had been done?

  12. Mayo says:

    “The ethical problem here is not with the people”?
    I don’t know this particular example, but in general I think it’s absurd and, frankly, unethical to blame a publication or other incentive system for bad science, fraud, cheating, etc. It’s much like blaming tantalizing ads for fancy cars when someone robs a bank.

    • Erin Jonaitis says:

      I disagree. I think of it in terms of selection bias. If your career path sets up an incentive system that involves gameable metrics, and then weeds out most of the entrants, it seems to me you will wind up with a pool of people in that career that is biased toward cheaters. What an individual does is largely up to the individual, yes, but what the community looks like is at least partly a product of the rules guiding that community, and what ethical people decide to do when encountering those rules and their intersection with reality.

      Now, you can argue that all metrics are gameable and I won’t really have a good rejoinder. Are there any professional communities known for producing an unusual number of upstanding people? Maybe it’s worth looking at those career ladders if so. I’d rather have that kind of selection bias in my field.
