
Slipperiness of the term “risk aversion”

I don’t like the term “risk aversion” (see here and here). For a long time I’ve been meaning to write something longer and more systematic on the topic, but every once in a while I see something that reminds me of how slippery the term is.

For example, Alex Tabarrok asks, “Why are Americans more risk averse about medicine than Europeans?” It’s a good question, and it’s something I’ve wondered about myself. But I don’t know what he’s talking about when he says that “the stereotype is that Americans are more risk-loving” than Europeans. Huh? Americans are notorious for worrying about risks, with car seats, bike helmets, high railings on any possible place where someone could fall, Purell bottles everywhere, etc etc. The commenters on Alex’s blog are all talking about drug company regulations, but it seems like a broader cultural thing to me.

But I’m bothered by the term “risk aversion.” Why exactly is it appropriate to refer to strict rules on drug approvals as “risk averse”? In a general English-language use of the words, I understand it, but it gets slippery when you try to express it more formally.

I understand what Alex is saying–people are afraid of the risk of an adverse drug reaction, with this fear being “risk averse” rather than simple rational prudence if the cost of the risk aversion outweighs, in expectation, the risk being avoided. (After all, we don’t call it “risk averse” to avoid going down Niagara Falls in a barrel. The idea of “aversion” is that one is evaluating a tradeoff using a rule that is more stringent than the calculation of expected values.)
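To pin down the textbook version of what Alex means: in expected-utility theory, “risk aversion” is just a concave utility function, which makes a gamble worth less to you than its expected value. Here’s a minimal sketch in Python (the 50/50 gamble, the square-root utility, and the function names are all made up for illustration, not taken from anything Alex wrote):

```python
import math

def expected_value(lottery):
    """Probability-weighted average payoff of a list of (prob, payoff) pairs."""
    return sum(p * x for p, x in lottery)

def expected_utility(lottery, u):
    """Probability-weighted average utility under utility function u."""
    return sum(p * u(x) for p, x in lottery)

def certainty_equivalent(lottery, u, u_inv):
    """Sure amount the agent values the same as the gamble."""
    return u_inv(expected_utility(lottery, u))

# A 50/50 gamble between $0 and $100.
lottery = [(0.5, 0.0), (0.5, 100.0)]

# Concave utility (diminishing marginal value) is what "risk averse" means here.
u = math.sqrt
u_inv = lambda v: v ** 2

ev = expected_value(lottery)                  # 50.0
ce = certainty_equivalent(lottery, u, u_inv)  # 25.0: this agent would take a
                                              # sure $25 over the gamble
```

The certainty equivalent (25) sitting below the expected value (50) is the whole formal content of “risk averse.” Note that nothing in this formalism tells you which side of a risk-vs.-risk tradeoff counts as “the risk”; that’s where the slipperiness comes in.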

Still, it’s tricky to refer to this as “risk aversion” in a general sense. In the drug-approval context, there are two risks–the risks from an adverse drug reaction, and, on the other side, the risk of something bad happening that could’ve been prevented by taking the drug. It’s risk vs. risk. What if someone said we should approve just about every drug, so as to avoid the risk of some otherwise-preventable condition? That would be risk-averse in another way, right?

This stance might seem fanciful, but I actually think it’s pretty common, if you shift the context just slightly. Having done some (academic) work on pest control, I’ve learned that the most effective method of reducing home roach infestation is to clean the place, put poison in the cracks in the walls, and seal the cracks. “Bombing” the apartment doesn’t really do the trick. It kills some roaches but then the others come back. And this is beyond whatever poisoning you might get from the pesticide that’s sprayed all over.

Nonetheless, people just love, love that bombing. Every month in our building they put up a list asking who wants their apartment bombed, and lots of people sign up. (And, beyond these individual choices, there’s an institutional choice to bomb people’s apartments for free. Nobody’s offering to clean and seal our apartments for free.) Every month they do it, so I’m pretty sure the roaches are coming back.

To get back to the main point of discussion, this behavior can be viewed as risk-seeking or risk-averse. Risk-seeking because people are taking on a risk of being exposed to poison and basically getting nothing out of it. Or, risk-averse because people are willing to do something pretty extreme to avoid the risk of roach exposure. In general, the “take a pill for it” or “bomb it” attitude can be seen as risk-averse. Or not, depending on how you look at it.

I guess what I’m trying to say is that the original question–different attitudes toward drug approval and risky behavior among people in different places–is fascinating. I just don’t think “risk aversion” is a useful way of framing it. As I noted above, I’d like to write something more general on this topic, once I can think of the right way of putting it.


  1. Thorfinn says:

    I'm not sure I understand the complaint. Risk aversion is about the degree of extra expected payoff needed as compensation for bearing greater uncertainty.

    So in the drug example, this issue is exactly about competing risks. Someone with no risk aversion would (roughly) equate the deaths from the lack of treatment with the deaths from treatment. Someone who let in all drugs, leading to many deaths through treatment, is risk-loving. Alex is complaining that the FDA dramatically overweights the deaths from treatment. Calling this "risk aversion in reverse" illustrates that this behavior can be silly (at least in this context, where a death is a death), not that the concept is wrong.

    This is exactly equivalent, I think, to the tradeoff between Type 1 and Type 2 errors. Focusing on having a low p-value is like being risk-averse.

  2. Andrew Gelman says:


    I don't actually think the Type 1 and Type 2 error framework makes any sense in any of the application areas I've ever worked on (see here for more on the topic).

    But to get back to the main point on risk aversion: no, I don't think there's any utility function here at all, that's my point. In your example above, it's not clear at all whether allowing more drugs corresponds to having more or less uncertainty about deaths or unpleasant side effects or whatever.

    I also strongly oppose the idea of "risk aversion" being used in situations when there's no risk at all, as in the notorious $20/$30/$40 example (see section 5 of this article from 1998).

    I can't imagine that this short blog entry and article will convince you–I'm battling a pretty entrenched idea here–but I hope it will at least persuade you that these ideas aren't as obvious as you might have thought!

  3. derek says:

    How meaningful is it for an American to talk about what Americans are "notorious" for? Don't you just mean "this is something we like to tell each other now and then"?

    I'm particularly having a hard time picturing us Europeans saying to each other, "Hey, how 'bout those risk-averse Americans, with their freaky car seats?"

  4. manuelg says:

    Yes, I think I understand what you are saying. Converting your wealth to *any* basket of goods has risk. Turn all your wealth into gold, fearing inflation, and you are badly situated for a Mad Max Carmageddon rapid societal collapse. Turn all your wealth into gasoline and Chevys, and you are badly situated for any other possible world. Any action carries risk, any bout of inaction carries risk. So I understand your point to be: instead of having the cultural norms pick which risks count and which risks don't count, and describing some actions as "risk averting", rigor demands specifying the risks for all actions and also for inaction, and specifying how you rank or discount risks relative to each other.

  5. Elizabeth says:

    You could just describe the same phenomena by saying that Americans have stronger consumer protections, perhaps for historically contingent reasons (the consumer movement led by Ralph Nader), perhaps deeper cultural or structural ones. That might influence the way Americans vs. Europeans perceive the magnitude of certain kinds of risks, as well, without bleeding into a more general "risk-averse" culture.

  6. Jeremy Miles says:

    I'd never thought of it like that – but risk averse is a bit of a useless phrase. Of course we're risk averse, everyone is risk averse. It's like saying pain averse, or death averse – it's true of almost everyone.

    I'm not American, I'm from the UK, but I've lived in the US for 3 years, and I've never noticed that Americans are risk averse – they (on average) seem to accept some risks (not wearing a seatbelt, allowing guns in national parks) more than other risks (rails on high places; Purell in every meeting room).

    Given how hard it is to understand and balance various risks, if there are differences, I'd suggest they are differences of communication and comprehension of those risks.

    (I just searched my hard disk, to see if I'd ever used the term risk-averse [at least since my last hard disk failure in 1997] and I'm pleased to say that I haven't. :)

  7. Jonathan says:

    It's far from surprising that a guy who doesn't believe in utility functions doesn't like the definition of risk aversion, since one is defined in terms of the other.

    Risk aversion is defined in terms of your own preferences, which includes your own understanding of the probabilities, not reality. Blame von Neumann and Morgenstern.

  8. ssendam says:

    One of the irritating things about economics is that it produces phrases that have some specific technical meaning, and are just close enough to the ordinary English meaning of the term to cause serious confusion – risk, uncertainty, ambiguity, technology, productivity…

  9. Eliot says:

    I agree with Elizabeth and Jeremy — Americans and Europeans seem to care about different risks. Genetically modified organisms: huge issue in Europe, total unconcern in the U.S. Riding a bicycle without a helmet in insane European city traffic: unthinkable to an American, routine for a European. I see no simple main-effect difference in overall tolerance of risk.

  10. Jonathan says:

    As an economist who does his work with "the public," I have great sympathy for ssendam's comment. And to be even more fair, several economics disputes can trace their origins to terms whose technical definitions and memorable cognate have become hopelessly confused. The opposite of "risk" isn't "upside," it's "certainty." (But even that's very loosely speaking.)

  11. Andrew Gelman says:

    Jonathan: You are giving the conventional definition of risk aversion from decision analysis and econ textbooks. But I'm arguing that this conventional definition doesn't make a lot of sense. And I don't think you can so easily "blame von Neumann and Morgenstern." Von Neumann and Morgenstern were great, and I love teaching that stuff–but the real question is, how applicable is it to real life?

  12. Paul says:

    I think a lot of the issue comes down to a conflation of risk preference and risk perception. I could be risk averse but still skip a helmet if I'm incorrect in assessing the likelihood of a bike crash. Given a perception of the risks involved, I'd tend to define risk aversion as the penalty a person assigns to the variance in results. Thus, in the medicine example, stopping the release of a drug would often be risk averse, because there are generations of statistics on the death rates of various diseases, and thus we can be more certain what would happen if we skip a drug than if we release a new one (where our knowledge is based on a far smaller set of clinical trials).

    At the same time, if you release a blanket rule of "approve any drug at all" you've got so little data that risk perception is probably playing the largest role. If you think drugs almost always help, then a lot of risk aversion could be outweighed by your higher perceived expected value.

    So on the one hand, yes, I'd say risk preference is a valid and useful concept; on the other hand, in most situations you won't be able to tease it apart from risk perceptions and all the myriad factors that make up our relative weighting of our options.

  13. Jared says:

    Elke Weber, right there at Columbia, has done a bunch of research on Risk Perception versus Risk Aversion, e.g.,

    With that said, in terms of the roach example, that would seem geared towards the more emotional definition of risk (e.g., the discussion of dread in P. Slovic's work).

    The crossroads between psychology and economics inherent in the topic certainly makes for some pretty interesting questions.

  14. Andrew Gelman says:

    Yes, exactly. I think people are making a big mistake when they conflate a psychological concept of risk with a particular mathematical model that typically does not make much sense.