Risk aversion is a two-way street

“Risk aversion” comes up a lot in microeconomics, but I think it’s too broad a concept to do much for us. In many, many cases, it seems to me that, when there is a decision option, either behavior X or behavior not-X can be thought of as risk averse, depending on the framing. Thus, when people talk about risk aversion, they’re not really saying much of anything at all. I link to the famous vase/faces illusion to reinforce the point that, for any given decision, what is risk averse and what is risk seeking can flip back and forth, but once you have one framing in mind, it can be hard to keep the other one in mind.

I’ve written about this several times on the blog but maybe not in the past year or so.

The topic arose today when I read a post by Tyler Cowen on a book by Greg Ip called “Foolproof: Why Safety Can Be Dangerous and How Danger Makes Us Safe,” whose theme is described as follows:

How the very things we create to protect ourselves, like money market funds or anti-lock brakes, end up being the biggest threats to our safety and wellbeing.

I’m sympathetic to this general argument (see, for example, sections 5 and 6 of this paper from 1998, the sections on “Utility of money and risk aversion” and “What is the value of a life”), and I’m guessing that I’d like Ip’s book.

But one of the examples, excellent in itself, illustrates the disturbing two-way nature of risk aversion. Here’s the example:

By Spellberg’s reckoning, the odds of an adverse reaction to an antibiotic, such as an allergic reaction, are about 1 in 10, whereas the odds that someone will suffer because antibiotics were wrongly withheld are about 1 in 10,000. Nonetheless, most physicians do not want to run the risk of letting a patient suffer when an antibiotic could help . . . His research in Nepal produced the depressing finding that antibiotic resistance was highest in communities with the most doctors.

When I started reading this example, given Cowen’s description of the book, I thought he was going to say that people are too worried about allergic reactions, that allergic reactions are super-rare and people should be less risk-averse, less worried about absolute safety, and instead they should do the rational thing and just take the damn drug.

Actually, though, the message in this example turned out to be the opposite: we are told that taking the drug (or, more specifically, prescribing the drug, as the focus seems to be on the doctor’s decision) is the overly-safety-concerned option that is actually a bad idea.

Either way, the message is clear: evaluate costs and benefits and make an informed decision. So in that sense the framing in terms of risk aversion is irrelevant. (Cowen does not actually use the term “risk aversion” in his discussion of Ip’s book, but that’s the only way I can think of interpreting the above discussion.)

My point is not to claim this researcher is wrong about the antibiotics—I have no idea about that—but rather that risk aversion is such a flexible framework that it can be applied in just about any direction. Using an antibiotic is risk averse because you’re taking the drug just in case. Not using an antibiotic is risk averse because you’re scared of the possible adverse reactions. Vaccinating your kid is risk averse because you’re worried about measles. Not vaccinating your kid is risk averse because you’ve heard it might cause autism. Increasing your savings in the stock market is risk averse because you’re insuring yourself against inflation, or keeping your savings in cash is risk averse because the stock market might crash. Driving is risk averse because you’re scared of flying; flying is risk averse because you’re scared of driving.

This doesn’t work for me.

P.S. This stuff is controversial: the commenters to the above-linked post pretty much uniformly disagree with Cowen on the antibiotics question.

48 thoughts on “Risk aversion is a two-way street”

  1. At least within economics, risk aversion has a very technical definition: a situation in which preferences over lotteries are such that agents are willing to accept a lower expected-value payoff in exchange for greater certainty that they’ll actually receive that payoff. Using this framework it’s pretty easy to figure out what’s risk aversion and what’s not: run the statistics, and whenever people make EV-negative choices because they’re afraid of uncertainty, call that risk aversion.

    • Rishab:

      No, that’s not right. See section 5 of the above-linked paper for an explanation as to why you shouldn’t interpret a disinclination to play lotteries as an indication of curvature in the utility function. The orders of magnitude are just all wrong.

    • +1

      I always thought of risk aversion as choosing a guaranteed $40 rather than facing the uncertainty of a $0–$100 coin-flip game with an expected payoff of $50.

      Is that not right?
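      For concreteness, here is a minimal sketch in Python, assuming a square-root utility over the prize (the utility function is my own illustrative choice; any concave utility makes the same qualitative point):

      ```python
      import math

      def u(x):
          """Illustrative concave utility over the prize (an assumption for this sketch)."""
          return math.sqrt(x)

      # Coin flip: 50% chance of $0, 50% chance of $100
      expected_value = 0.5 * 0 + 0.5 * 100          # $50
      expected_utility = 0.5 * u(0) + 0.5 * u(100)  # 5.0
      certainty_equivalent = expected_utility ** 2  # invert the square root: $25

      print(expected_value, certainty_equivalent)   # 50.0 25.0
      # This agent values the gamble at only $25, so preferring a sure $40 over a
      # $50-expected-value gamble is risk aversion in the textbook sense.
      ```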

        • Or, to put it another way: yes, that behavior is risk aversion—I have no problem with that description—but it should not be interpreted as a curving utility function for money. The problem in the econ literature is not with the identification of risk aversion with this disinclination to gamble; the problem arises when this is taken as a statement about a utility function.

          The point of my post above is that risk aversion takes many forms, and in many settings involving options A and B, either A or B can be interpreted as risk aversion.

        • Thanks.

          So I guess people should stick to using risk aversion in the strict sense, and not stretch it to other usages where the meaning becomes fuzzy & vague?

        • Rahul:

          Actually, I’d put it the other way: I think that risk aversion is inherently a psychological concept that is necessarily a bit vague. Talking about risk aversion is fine with me (although I’d prefer some more precision by distinguishing between uncertainty aversion and loss aversion); I think the problem is in the implicit assumption that risk aversion has some strict, rigorous definition.

        • Vague concepts are ok if this is merely qualitative discussion.

          But if any of this is amenable to quantitative modelling, why prefer vagueness over rigor?

        • Rahul:

          Lots of attitudes are vague but they’re part of life, hence worth studying from a psychological view. Consider some psychological states such as happiness, annoyance, etc.; traits such as generosity, self-absorption, etc.; actions such as trolling, flattery, etc.; or motivations such as risk-aversion, desire for closure, etc. All of these are real (in the sense of being shared and recognizable) but all are vague.

          It’s possible to quantitatively model any of these things, but the quantitative model will necessarily capture only part of the story. I don’t object to quantitative modeling (of course I don’t; I’m a statistician!); what I object to is taking a particular quantitative model and defining it as the whole thing. “Risk aversion” is an example of where a particular quantitative model has misled many, to the extent that people need to be reminded of examples such as section 5 in that paper of mine, over and over.

          To put it another way: Yes to quantitative modeling of risk aversion, No to letting a single, very restrictive, model get in the way of more serious modeling and understanding of the phenomenon.

        • To be fair, models incorporating risk aversion as curvature in a utility function have served as an important jumping-off point for developing more general representing functions (like the famous prospect theory). Unfortunately, risk aversion (defined in the intuitive sense of requiring a deterministic payoff of K to be strictly preferred to binary gambles with expected value K) seems to imply a great deal less about these more general functional forms than it does when you’re willing to assume the very simple functional form of expected-utility theory.

        • Andrew:

          If you look at “risk aversion” as merely a soc. sci. phenomenon to be studied, I guess vagueness is OK.

          OTOH, if you use the concept to guide decisions, regulation, or policy-making, or as the basis of further model building, I see vagueness as a bug & not a feature.

        • If people wish to explore some kind of risk aversion that doesn’t fit into the artificially constrained definition, don’t they usually just call it something else (e.g., “ambiguity aversion” or “uncertainty aversion”)?

        • @Sam

          I agree with your comment. The right solution is to use different terms, not to make one single term fuzzy by reusing it in various ambiguous senses.

        • Rahul:

          I agree. But Sam wrote, “If people wish to explore some kind of risk aversion that doesn’t fit into the artificially constrained definition, don’t they usually just call it something else (e.g., ‘ambiguity aversion’ or ‘uncertainty aversion’)?” And the answer to that question is no: economists use “risk aversion” to mean all sorts of different things. Sam’s idea is fine as a normative suggestion but not as a description of how economists, or social scientists more generally, use the language. The term “risk aversion” is overloaded. Hence my above post.

        • My problem with the example is that I think the students are “tricked” into accepting that it’s reasonable to be indifferent between the following outcomes for any $x:

          a certain amount of money $x

          a 55% chance of $x+10 and a 45% chance of $x-10

          I think if they really thought about large values of $x and asked themselves whether, if they were a billionaire, they would be more inclined to make a positive-utility bet, they would say yes. However, by phrasing the question as being about a “gain of $x,” it encourages them to conceptualize it as a small quantity consistent with their usual budgetary constraints.
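          A quick sketch of that intuition, assuming (purely for illustration) log utility over total wealth: at any substantial wealth level the gamble is worth more than $x for sure, so indifference at every $x can’t come from the curvature of a sensible utility function alone.

          ```python
          import math

          def certainty_equivalent(w):
              """CE, under log utility of total wealth w (an illustrative assumption),
              of a 55% chance of w + 10 and a 45% chance of w - 10."""
              eu = 0.55 * math.log(w + 10) + 0.45 * math.log(w - 10)
              return math.exp(eu)

          for w in [100, 10_000, 1_000_000_000]:
              print(w, round(certainty_equivalent(w) - w, 3))
          # The gamble is worth roughly $0.50 to $1.00 more than w itself at each level,
          # so a wealthy log-utility agent strictly prefers the gamble to $x for sure.
          ```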

        • Peter:

          I don’t know that anyone’s being tricked. Uncertainty aversion and loss aversion seem to be pretty robust findings. I do agree, though, with the point that utility analysis is a useful way for people to examine the inconsistencies in their preferences and decisions.

    • “Sounds bogus”

      More to the point, it sounds exactly like the “seat belts cause reckless driving” arguments we’ve been seeing for years. This looks like yet another regurgitation of The Armchair Economist.

    • Rahul:

      To be fair, the claim is that “the very things we create to protect ourselves” are the biggest threats. Anti-lock brakes are, I assume, just an example of the type, not an important example. Here’s a bigger example (if you buy the general argument): The US government spends zillions on the military. Why? To keep us safe (among other reasons). But an argument could be made that much of this spending is making us less safe. Activities done in order to reduce risk can create risk.

      This last statement is so general as to be empty; my point is just that the “biggest threats” claim is not obviously wrong.

      • Andrew,

        While it is possible to frame the “the very things we create to protect ourselves” claim so as to be not just reasonable but obviously true, I doubt that’s what’s going on here. I haven’t read Ip’s book but, given his present employer, Cowen’s inclinations, the prominence of brakes on the book’s homepage, and the enduring popularity of the “safety regulations don’t make us safe” genre (remember the bicycle helmet story?), I’m betting readers are in for another moral-hazard-based anti-regulation tract.

        • I vaguely recall reading through the book previously; I don’t remember it being too intolerably overwrought on the policy side of things. The general theme was as Andrew described: it’s a book that shuffles through a large number of examples arguably demonstrating a general tendency of humans to adapt to new risk-reducing technologies by increasing their own behavior’s contribution to risk (most often gaining something else in turn, like shorter trip times, though in some cases – as with forest fires – without unambiguously improving anything). I’m not sure it was effective as an argument that “the very things we create to protect ourselves” are the biggest threats, but it certainly leaves the reader believing that technology designed without taking into account adaptive behavior from users may have a lot of curious unintended effects, and in particular may not yield the desired safety. In extreme circumstances it also certainly seems possible that safety (or whatever criterion) could actually decrease, although I don’t recall Ip’s book giving any forceful examples of that sort.

    • “Biggest threat” is an exaggeration, but there is an argument that safety features encourage people to take unnecessary risks: if you didn’t have anti-lock brakes, you might drive more cautiously. (I’ve done some work in industrial safety engineering, and this is a very controversial topic.)

      • My opinion is that people compensate, but not fully; i.e., there’s no basis for risk homeostasis.

        ABS may cause some people to drive faster or take more risks, but not to the point where the net benefit is negated.

        • I’m pretty agnostic on this argument, but a lot of the evidence that people compensate for safety features with riskier behaviors is based on suspicious data, similar to the classic “helmets increased the rate of head wounds in WWI.” Specifically, it has been claimed that car-accident injury rates went up after the introduction of seat belts, and my recollection is that this claim has been criticized on grounds similar to the “helmets cause head wounds” fallacy.

  2. I believe a concave utility function is sufficient, but not necessary, for risk aversion as economists define it. It may be unfortunate that economists call this particular concept “risk aversion” but there are many poorly named concepts in economics, including utility, welfare, efficiency, etc. I don’t think risk aversion, as defined by economists, is really ambiguous at all – I think your objection is more that risk aversion, as defined by economists, is not a very good guide to decision making under uncertainty. Section 5 of your paper is a well known demonstration that people behave “irrationally” and that particular example (of inconsistency in people’s choices) has always bothered me as a label of irrational behavior. Faced with a sure $1 payoff and a very small chance of a very big payoff, I and almost all humans will pick the gamble. It is easy to then find inconsistency in decisions about other, more reasonable, gambles. I have never thought this to be “irrational or inconsistent.” Those terms only have meaning within the assumptions economists use about what constitutes “rational behavior.”

    I think the more pertinent objection is about the proper way to view decision making under uncertainty. We have the Kahneman and Tversky “loss aversion” as one alternative to the economists’ “risk aversion.” We also have the recent book, “Willful Ignorance” by Herbert Weisberg, which argues that probability itself is too one dimensional. He would have ambiguity restored to a prominent place alongside probability to describe uncertain situations. I’m sure there are other approaches – although I don’t believe any has achieved universal acceptance. The economics use of risk aversion can be a productive assumption about behavior, just as “rational” can result in testable hypotheses about behavior. In fact, both assumptions work sometimes (in terms of predicting behavior that is consistent with those assumptions) and fail miserably at other times.

    So, all I am objecting to is your statement that risk aversion is an ambiguous concept. In my mind, it is not ambiguous, but it may not be useful as a behavioral assumption. The examples like prescribing antibiotics or not would be unambiguous if we all agreed on the probabilities and payoffs associated with and without taking the antibiotics. It is the fact that we probably can’t agree on either the probabilities or the payoffs that I believe Weisberg was writing about in Willful Ignorance. The choice cannot (or should not) be reduced to a comparison of two probabilities (that of a cure and that of a side effect) and two payoffs.

    • Dale:

      One problem is that economists use the term “risk aversion” in different ways. Sometimes they are referring to a curved utility function; other times they are talking about aversion to risk in the way that we use the term in English.

    • There’s a fair bit of work running around the generalized expected utility literature right now that features models of ambiguity as its centerpiece. I’ve often seen it described by taking the infimum of a representing function over a convex set of probability distributions, for example.

      A lot of that work seems to remain trapped within its own subliterature, however (some notable exceptions aside); I’m not sure what exactly it needs to kick its popularity from the theoretical single-individual world to the broader realms of microeconomics, macroeconomics, and game theory.

  3. My favorite example of the dual nature of risk aversion is fixed prices. A long-term contract to pay $x per month for a given quantity of some commodity is sometimes called the risky choice and sometimes the safe choice. When prices go up (or are thought to be likely to go up) locking in long-term, even at a rate higher than current rates, is thought to be “insurance.” On the other hand, paying the current market price for the rest of time could also be thought of as risk minimizing, in the sense that you’re always paying what the average guy is paying.

    So fixed-rate mortgages are often called less risky even though, so long as you can afford to easily bear any upward spikes in rates, adjustable-rate mortgages have proven to be much cheaper and eliminate any possibility that you will be stuck paying a high rate without an expensive refinancing.

    When utility customers are locked into long-term electricity rates, regulators often criticize the utility for not giving people short-run flexibility. But flexibility is just another word for tolerating variance, which can be preferable. The confusion lies in thinking people hate fluctuation per se, rather than hating prices that move against them in a substantive way.

  4. I (uncharacteristically) agree (generically) with Tyler’s commenters on this one: I don’t like how that example is framed.

    On the one hand, we have the odds of an adverse reaction conditional on taking antibiotics, about 1 in 10. From what I’ve found, that number is in the right range, but most of those reactions are minor (GI disturbances, skin rashes, etc.). The risk of life-threatening reactions like severe anaphylactic allergy is more like 1 in 10,000.

    On the other hand, we have a probability that antibiotics will be wrongly withheld given that the patient has a condition that would benefit from antibiotic treatment.

    Those aren’t measuring the same thing: the former is independent of the rate at which antibiotics are prescribed (since it’s conditional on taking antibiotics), while the latter depends on the rate of prescriptions for different conditions.

    AIUI, it’s generally accepted that antibiotics are over-prescribed for conditions where they have minimal likelihood of benefit, sometimes even as a kind of potentially harmful placebo. However, also AIUI, conditions where antibiotics are highly likely to help are generally pretty unambiguous (there are tests for bacterial infections), so the rate of antibiotics being wrongly withheld in cases where they would be clearly beneficial is small because doctors are unlikely to make mistakes in those cases.

    So there’s a general argument that antibiotics are overprescribed, but that’s useless for weighing the costs and benefits of any particular prescription, as the tradeoff has huge variability depending on the specific condition.

  5. Bottom line for me is that “risk aversion” is a term that falls into the category of “When you give something a name, you feel you understand it (even if you don’t).”

    Comments above that point to ways people “feel they understand” a phenomenon they call “risk aversion”:

    1) Andrew’s point: it is often helpful to refer to the more specific concepts of “loss aversion” and “uncertainty aversion.” (People often don’t distinguish the two, so “feel” they understand.)

    2) Dale has mentioned Weisberg’s opinion that “ambiguity” needs to be “restored to a prominent place alongside probability to describe uncertain situations.” (Ignoring this distinction is another way of “feeling” one understands.)

    3) Dan points out that “odds of an adverse reaction conditional on taking antibiotics” is “independent of the rate that antibiotics are prescribed (since it’s conditional on taking antibiotics),” while “probability that antibiotics will be wrongly withheld given that the patient has a condition that would benefit from antibiotic treatment” is “dependent on the rate of prescriptions for different conditions.” Neglecting such countervailing risks, and the possibility that they may be independent, can be another way of “feeling” one understands.

    Here’s my attempt to describe my own take, synthesizing the above points and adding more:

    a) Usually when there is a risk of something, there is another kind of risk which runs counter to the first. So talking about “risk averse” doesn’t make sense in any general sense (e.g., as a characteristic of a person); we need to specify “aversion to what specific risk?”

    b) This aversion might be to loss (risk of loss) or to uncertainty (not knowing whether we will win, lose, or whatever other possibilities there might be). So again, we need to be talking about a specific situation and a specific type of risk to make any sense.

    c) In any specific “possible risk” situation, individuals vary greatly in what they value — this influences which of two competing risks they would prefer to take.

    d) But risk isn’t necessarily a binary choice — risky things are often risky in varying degrees. So “risk aversion” starts sounding really silly: At what degree of a risk does someone become averse? Or maybe we need to talk about degrees of risk aversion (for a specific risk — or dimension of risk)?

    e) If we indeed talk about degrees of risk aversion — well, this is undoubtedly influenced by different people’s values (as in (c)).

    f) And if we talk about degrees of risk, we then see we need to consider what degree of risk A would outweigh what degree of a competing risk B? Well — that’s also a matter of different people’s values.

    g) And talking about just two competing risks might be an oversimplification — there might be several competing and/or interacting risks involved. (E.g., in a medical example, cost might also be a factor.)

    (Not sure if this is all coherent; I’m getting tired — but I hope I’ve made a point — if no other, then at least that “risk averse” seems like a silly concept.)

    • My working definition is:

      “People are willing to pay a significant premium for a low variance alternative when faced with multiple choices that all have the same expected value of payoff but different variances.”

  6. I think the example illustrates how the notion of salience interacts with risk-aversion. See http://qje.oxfordjournals.org/content/127/3/1243.abstract.
    Certain outcomes are more salient, which could lead agents to “overweight” their effect on utility. In this example, the allergic reaction to the antibiotic isn’t as salient as the suffering from the illness in the absence of the antibiotic; therefore agents tend to prefer taking the antibiotic despite the probabilities of each event occurring. Salience is related to but different from probability overweighting and downside risk-aversion, both of which could also explain the example you gave.

    • +1

      But I’d add: A lot of people take the antibiotic just because the doctor prescribed it; so the practical question is how the physician weighs the pros and cons. Salience could be a factor; so could promotion by drug companies (which does affect salience); habit could be a factor; making (possibly true or possibly false) assumptions about patient preferences could be a factor.

      • “Just because the doctor prescribed it” is a sensible reason: she’s a doctor! You might be less risk-averse than she is, but what degree of confidence do you have in your judgment that it’s all risk-aversion, and is your estimate of that confidence itself reasonable?

        The Dunning-Kruger result tells us that ignorant people are like that precisely because they don’t know how ignorant they are.

        • ““Just because the doctor prescribed it” is a sensible reason: she’s a doctor!”

          I’m not convinced that “she’s a doctor” is necessarily a sensible reason. For example:

          1. I’ve had experiences of trusting a doctor when I shouldn’t have.

          2. I am willing and able to look at package inserts online and weigh the evidence they give.

          3. I also (am fortunate to) have the ability to access published research articles and read them critically (and am aware that a lot of drug research doesn’t get published).

          4. In deciding whether or not to take antibiotics in particular, one risk I weigh is that overuse/unnecessary use of antibiotics helps promote antibiotic-resistant strains of bacteria. (Some physicians do this; others don’t.)

    • We know that most people are bad at understanding and working with probability, and their intuition often fails them. Why in the world should we expect any satisfactory understanding of “risk,” which is a far more complex idea? Or any useful policy decisions? And that’s assuming one could even get adequate data.

      In the field of project or program management, “risk” is often taken to be a combination of probability and impact. If either is zero, the risk should be zero, and if both are one (i.e., the maximum, normalized to one), the risk is max, i.e., one. The easiest way to get this behavior is to use powers:

      R = I^a * P^b

      In practice, a and b are often taken to be unity, though I tend to like b = 0.5 myself.
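      In code, that scoring rule is just the following (a minimal sketch; the function name and example numbers are mine):

      ```python
      def risk_score(impact, probability, a=1.0, b=1.0):
          """Project-management-style risk score R = I^a * P^b, with impact and
          probability both normalized to [0, 1]."""
          return impact ** a * probability ** b

      # a = b = 1 (the common choice) versus my preferred b = 0.5:
      print(risk_score(0.8, 0.1))         # 0.08
      print(risk_score(0.8, 0.1, b=0.5))  # ~0.25: rare but high-impact events rank higher
      ```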

      But whether this formulation, however intuitive, leads to useful results is another question. The one largish project I was involved with that tried this approach, using many estimates of I and P, just got a spread of sort-of middle-of-the-road risk values that were not very illuminating or helpful.

      Because of all the above, I don’t have much confidence in any economist’s claims about the consequences of “risk aversion,” especially if the term isn’t used in a limited, well-defined technical sense.

  7. I think the vagueness of risk aversion has parallels with the precautionary principle in risk regulation. Essentially, whether a policy option is considered precautionary or not depends entirely on how the decision is framed. Cass Sunstein has written a lot on this.

    Having said that, there are some technical definitions of precaution that seem to escape this ambiguity (e.g. Rawls’ minimax), although this is rarely explicitly used in regulation, and also carries its own problems.

  8. Andrew, your interpretation in this comment thread of the examples generated in section 5 of the linked paper is incorrect.

    As Peter Gerdes alluded to above, the assumption that risk preferences are constant across all levels of wealth is an extremely strong assumption, and it is in no way a requirement of utility theory. Therefore, your conclusion that “the problem arises when this is taken as a statement about a utility function” is not correct. Your conclusion should be “the problem arises when this is taken as a statement about a utility function that is assumed to have constant curvature over all wealth levels,” which is a much weaker statement.

    • Evan:

      No, I’m not at all saying that risk preferences are constant across all levels of wealth. I’m just giving an example that is typical of people’s risk preferences (a decided preference for $x, as compared to a 50/50 chance of $x − 10 and $x + 10) and demonstrating that this cannot be derived from any reasonable utility function. Take any set of preferences like this and you’ll have the problem. A constant level of risk aversion is not required; that just makes the math simpler. The same issue will arise as long as there is a clear preference for the certain outcome in all these decision problems.

      • I didn’t read the linked paper, but what Andrew’s describing sounds like a theorem I believe is attributed to Matthew Rabin, and it definitely does not require constant curvature. See this abstract/paper:

        “This paper provides a theorem showing that expected-utility theory is an utterly implausible explanation for appreciable risk aversion over modest stakes: Within expected-utility theory, for any concave utility function, even very little risk aversion over modest stakes implies an absurd degree of risk aversion over large stakes”

        http://users.nber.org/~rosenbla/econ311/syllabus/rabincallibration.pdf

        • I think Rabin’s description of the underlying intuition is especially valuable:

          “The intuition for such examples, and for the theorem itself, is that within the expected-utility framework turning down a modest-stakes gamble means that the marginal utility of money must diminish very quickly for small changes in wealth. For instance, if you reject a 50-50 lose $10/gain $11 gamble because of diminishing marginal utility, it must be that you value the 11th dollar above your current wealth by at most 10/11 as much as you valued the 10th-to-last-dollar of your current wealth. Iterating this observation, if you have the same aversion to the lose $10/gain $11 bet if you were $21 wealthier, you value the 32nd dollar above your current wealth by at most 10/11 * 10/11 = 5/6 as much as your 10th-to-last dollar. You will value your 220th dollar by at most 3/20 as much as your last dollar, and your 880th dollar by at most 1/2,000 of your last dollar. This is an absurd rate for the value of money to deteriorate — and the theorem shows the rate of deterioration implied by expected-utility theory is actually quicker than this. Indeed, the theorem is really just an algebraic articulation of how implausible it is that the consumption value of a dollar changes significantly as a function of whether your lifetime wealth is $10, $100, or even $1,000 higher or lower. From such observations we should conclude that aversion to modest-stakes risk has nothing to do with the diminishing marginal utility of wealth”
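          A rough numerical sketch of that iteration (crudely compounding one 10/11 step per $21 of extra wealth; Rabin’s exact bookkeeping implies even faster deterioration than this):

          ```python
          # Rejecting the 50-50 lose $10 / gain $11 gamble at a given wealth level caps the
          # marginal value of a dollar $21 further up at 10/11 of its current value. If the
          # gamble is rejected at every wealth level, the caps compound geometrically:
          for extra in [21, 210, 420, 840, 2100]:
              bound = (10 / 11) ** (extra // 21)
              print(f"${extra:>5} above current wealth: marginal value of a dollar <= {bound:.5f}")
          ```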

          A geometric rate of decrease in the marginal value of money is indeed curiously fast.

          Bears an interesting parallel to the time value of money as well, where exponential discounting is the standard economic model but is known to be inconsistent with robust experimental evidence on . . . well, time inconsistency.

          Perhaps geometric rates aren’t so great a model for most psychological phenomena…

        • Phil:

          Yes, it’s definitely worth this being spelled out. What can be clear to a statistician in the context of a class discussion is not always clear in print, so it’s good that Yitzhak explained all the steps.
