Loss aversion etc

If a person is indifferent between [x+$10] and [55% chance of x+$20, 45% chance of x], for any x, then this attitude cannot reasonably be explained by expected utility maximization. The required utility function for money would curve so sharply as to be nonsensical (for example, U($2000)-U($1000) would have to be less than U($1000)-U($950)). This result is shown in a specific case as a classroom demonstration in Section 5 of a paper of mine in the American Statistician in 1998 and, more generally, as a mathematical theorem in a paper by my old economics classmate Matthew Rabin in Econometrica in 2000.
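As a quick numeric check on that claim, here is a sketch in Python (the normalization u($10)-u($0)=1 is my choice, made for illustration): under the indifference condition, each successive $10 utility increment shrinks by a factor of 0.45/0.55 = 9/11, and the implied curvature is exactly as absurd as stated.

```python
# Indifference between [x+$10] and [55% chance of x+$20, 45% chance of x] for
# every x implies u(x+10) = 0.55*u(x+20) + 0.45*u(x), so each successive $10
# utility increment shrinks by a factor of 0.45/0.55 = 9/11.
r = 0.45 / 0.55

def increment(n):
    # utility gain from the n-th $10 step, normalizing u($10) - u($0) = 1
    return r ** n

def u_diff(lo, hi):
    # u($hi) - u($lo), for dollar amounts that are multiples of $10
    return sum(increment(n) for n in range(lo // 10, hi // 10))

# The implied curvature is nonsensical:
print(u_diff(1000, 2000) < u_diff(950, 1000))  # True
```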

I was thinking about this stuff recently because of a discussion I had with Deb Frisch on her blog. I like Matt’s 2000 paper a lot, but Deb seems to be really irritated by it. Her main source of irritation seems to be that Matt writes, “The theorem is entirely ‘nonparametric,’ assuming nothing about the utility function except concavity.” But actually he makes fairly strong assumptions about preferences (basically, a more general version of my [x, x+$10, x+$20] gamble above), and under expected utility, these have strong implications for the utility function.

Matt’s key assumption could be called “translation invariance”–the point is that the small-stakes risk aversion holds at a wide range of wealth levels. That’s the key assumption–the exact functional form isn’t the issue. Deb compares this to a power-law utility function, but expected-utility preferences under such a power law would not show substantial small-scale risk aversion across a wide range of initial wealth levels.
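To illustrate this numerically, here is a sketch (the square-root utility is my illustrative choice of power law, not a function from the discussion): under a power-law utility, the certainty equivalent of the gamble approaches its mean of x + $11 as wealth grows, so the small-stakes risk aversion washes out rather than staying translation invariant.

```python
# Under a CRRA (power-law) utility, here u(w) = w**0.5 as an illustration, the
# certainty equivalent of [55% chance of x+$20, 45% chance of x] approaches the
# gamble's mean value of x + $11 as initial wealth x grows, so small-stakes
# risk aversion vanishes instead of being "translation invariant".
def certainty_equivalent(x):
    eu = 0.55 * (x + 20) ** 0.5 + 0.45 * x ** 0.5  # expected utility of the gamble
    return eu ** 2 - x  # dollar value of the gamble, measured above x

for x in [100, 1_000, 10_000, 100_000]:
    print(x, round(certainty_equivalent(x), 3))
```

Even at modest wealth the certainty equivalent already exceeds the sure $10, so this utility function cannot reproduce the indifference at a wide range of wealth levels.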

Deb did notice one mistake in Matt’s paper (and in mine too). Matt attributes the risk-averse attitude at small scales to “loss aversion.” As Deb points out, this can’t be the explanation, since if the attitude is set up as “being indifferent between [x+$10] and [55% chance of x+$20, 45% chance of x]”, then no losses are involved. I attributed the attitude to “uncertainty aversion,” which has the virtue of being logically possible in this example, but which, thinking about it now, I don’t really believe.

Right now, I’m inclined to attribute small-stakes risk aversion to some sort of rule-following. For example, it makes sense to be risk averse for large stakes, and a natural generalization is to continue that risk aversion for payoffs in the $10, $20, $30 range. Basically, a “heuristic” or a simple rule giving us the ability to answer this sort of preference question.

Attitudes, not preferences or actions

By the way, I’ve used the term “attitude” above, rather than “preference.” I think “preference” is too much of a loaded word. For example, suppose I ask someone, “Do you prefer $20 or [55% chance of $30, 45% chance of $10]?” If he or she says, “I prefer the $20,” I don’t actually consider this any sort of underlying preference. It’s a response to a question. Even if it’s set up as a real choice, where they really get to pick, it’s just a preference in a particular setting. But for most of these studies, we’re really talking about attitudes.

9 thoughts on “Loss aversion etc”

  1. Your result is not a special case of Rabin’s calibration theorem. Your result is equivalent to his. The differences are that:

    a. you did it in four paragraphs instead of twenty pages,

    b. you did not misinterpret it as evidence of loss aversion, and

    c. you understood that it might call into question both the normative and descriptive validity of EU. Rabin assumed it challenged EU's descriptive validity but not its normative status.

    You showed that assuming:

    f(x+10)=.55*f(x+20)+.45*f(x), for all x

    leads to absurd predictions.

    Rabin showed that assuming:

    f(x)=.5*f(x+11)+.5*f(x-10), for all x

    leads to absurd predictions.

    I think that you both showed that assuming that

    U(x) = c - k^x, where 0 < k < 1

    leads to absurd predictions. You showed that the “negative exponential function” is not a plausible utility function. You showed that constant absolute risk aversion (CARA) is implausible, to use the terminology of Arrow (1971).
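    To make the CARA point concrete, here is a small check (the test values are my choice): solving the indifference condition for u(x) = -k^x pins down k^10 = 9/11, and the resulting preferences are then exactly translation invariant.

```python
# Check that the negative exponential (CARA) utility u(x) = -k**x is exactly
# translation invariant here: the indifference condition reduces to
# 0.55*k**20 - k**10 + 0.45 = 0, whose nontrivial root is k**10 = 9/11, and
# indifference between [x+$10] and [55% chance of x+$20, 45% chance of x]
# then holds at every wealth level x.
k = (9 / 11) ** 0.1

def u(x):
    return -(k ** x)

for x in [0, 50, 100, 1_000]:
    sure = u(x + 10)
    gamble = 0.55 * u(x + 20) + 0.45 * u(x)
    assert abs(sure - gamble) <= 1e-9 * abs(sure)
```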


    Utility functions that exhibit constant relative risk aversion (CRRA) such as the power function in cumulative prospect theory are more plausible.


    AG: Matt's key assumption could be called "translation invariance"–the point is that the small-stakes risk aversion holds at a wide range of wealth levels. That's the key assumption–the exact functional form isn't the issue.

    DF: The point is that “translation invariance” is equivalent to a functional form that approaches an asymptote very quickly, crazily quickly. The equivalence between a behavioral invariance and a mathematical function is not very "accessible" (Kahneman, 2003).

    But functions that exhibit CRRA such as the power function used in prospect theory do not exhibit “translation invariance” and do not lead to crazy predictions.

    You could say something like:

    Empirically, people seem to exhibit “translation invariance.” However, it is impossible to capture this behavioral regularity in a “utility function” without making absurd predictions.

    Therefore, translation invariance is empirical evidence against the normative and descriptive validity of EU.

  2. Deb,

    Thanks for the comments. Regarding Matt's paper vs. mine, it's a matter of style. The sort of argument that might convince a decision theory class might not be rigorous enough to convince an econometrician. Conversely, Matt's proof might be too abstract for the students in my class.

    Regarding translation invariance, it's not really my place to say if people "should" or "shouldn't" express indifference between [x+$10] and [55% chance of x+$20, 45% chance of x]. My point (and Matt's) is that such attitudes (I wouldn't call them "preferences," for reasons discussed in my blog entry above) cannot be explained by a curving utility function.

    This is not meant to dismiss power laws or other curving utility functions, just to show very dramatically that classical utility theory does not explain translation-invariant risk-aversion.

    As I discuss here, I think utility analysis makes much more sense for "institutional" than for "personal" decisions, and I think it's unfortunate that decision theory textbooks tend to focus on the "personal decision" aspect.

  3. AG: Regarding Matt's paper vs. mine, it's a matter of style. The sort of argument that might convince a decision theory class might not be rigorous enough to convince an econometrician. Conversely, Matt's proof might be too abstract for my course.

    DF: You and I agree that the Rabin/Gelman paradox is not due to loss aversion. This is a big deal. One reason the economic establishment felt comfortable awarding Kahneman the prize in 2002 was that they believed that Rabin had shown mathematically what K&T had shown empirically – EU is descriptively inadequate because people are loss averse in addition to being risk averse.

    How are the Gelman and Rabin paradoxes related? You seem to think yours is a special case of his or that his is more rigorous or abstract than yours. I disagree.

    Gelman paradox (1998, p. 171-172):

    G1=x+$10

    G2=.55 (x+$20)/.45 (x)

    G3= $41

    G4=.5 $1 billion/.5 $0

    If G1=G2 for all x, then G3>G4.
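    A rough numeric sketch of why G3>G4 follows (normalizing the first $10 utility increment to 1 is my choice): the shrinking increments bound total utility, so even half the utility of $1 billion falls below the utility of the sure $41.

```python
# Under the indifference assumption the $10 utility increments shrink by
# r = 9/11, so total utility is bounded above by 1/(1-r) = 5.5 (normalizing
# the first $10 increment to 1). Half the utility of $1 billion is then
# worth less than the sure $41.
r = 9 / 11

def u(dollars):
    steps = dollars // 10  # count whole $10 steps above $0 (a lower bound on u)
    return (1 - r ** steps) / (1 - r)

print(u(41) > 0.5 * u(1_000_000_000))  # True
```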

    Rabin paradox [Rabin & Thaler (2001, p. 223); Rabin (2000a, p. 1285); Rabin (2000b, p. 204)]:

    2000a=Econometrica paper

    2000b= chapter in Kahneman & Tversky’s Choices, Values and Frames.


    G1=status quo

    G2=.5 (win $11)/.5 (lose $10)

    G3=status quo

    G4=a 50-50 gamble with a bounded loss and an unbounded gain

    If G1>G2 for all G1 (i.e., at every wealth level), then G3>G4.

    There are several differences between the paradoxes.

    1. Gelman keeps the wealth level constant (current status quo) and changes the size of the gamble. Rabin keeps the size of the gamble constant and changes the wealth level.

    2. RP is more absurd than GP. R’s DM rejects a gamble with an INFINITE payoff. G’s DM rejects a gamble with a finite payoff.

    3. RP is more general than GP. R provides many concrete examples (a DM who always rejects a 50-50 win $105/lose $100 gamble will reject a 50-50 win infinity/lose $1000 gamble) while G provides just one.

    4. RP assumes G1>G2. GP assumes G1=G2.

    5. I fall for the RP but not the GP.


    Taking these points in turn:

    1. I am not sure of the implications of varying the frame. They are psychologically different – they seem to involve different types of invariance. But mathematically, I think they are equivalent.

    2. You could modify the GP to make it just as absurd (i.e. reject infinity) as the RP. I bet if you changed Gelman’s G3 to $100, that’d do it.

    3. You could generalize the GP by varying the probabilities and the size of the “error” in the payoff.

    4. The GP assumption is MUCH more tractable, with no decrease in the generality of the paradox.

    5. This is strange to me. I feel pretty sure that I would never want to toss a coin where I could win $11 and lose $10. But the Gelman paradox doesn’t do it for me. As x increases, I start to prefer the gamble. I’d rather have $10 than a 55% chance of $20. But I’d rather have a 55% chance of $1010/45% chance of $990 than $1000 for sure.

  4. I think people typically think about this sort of problem by rewriting [x+$10] and [55% chance of x+$20, 45% chance of x] into a different framework. First they mentally pocket the sure $10, then think about the choice between [55% chance of gaining $10, 45% chance of losing $10] and [do nothing].

    This rewritten problem looks like loss aversion right?

  5. Maybe, Tim. But you get the same paradox for a decision maker who always prefers $1 to a 50% chance of $3. Do you want to say that the reason people prefer $1 to a 50% chance of $3 is loss aversion?

  6. I don't see why not. Loss aversion/regret theories seem to be the most plausible explanations for why someone would make such a choice. Are you and/or Andrew offering a different theory? The discussion you are having is getting rather difficult to follow at this point.

  7. Tim,

    I agree that a plausible explanation for translation-invariant risk attitudes is that people "rewrite" the problem. And I agree that loss aversion is a plausible explanation for risk-aversion in the rewritten formulation.

    But I would not consider loss aversion to be a reasonable explanation for the original problem of [x+$10] and [55% chance of x+$20, 45% chance of x] for positive values of x. There are simply no losses here, so I can't see loss aversion as an explanation.

    "Uncertainty aversion" is a possible explanation, but, just introspecting on the problem, I don't buy it. To me, the decision to prefer x+$10 is more of an application of a general rule favoring risk-aversion, rather than a preference associated with fear of loss or uncertainty.

    These distinctions could make a difference if, for example, one is considering different ways of presenting policy options (for example, should you pay to measure your house for radon gas, or how much of a rebate should you get for switching to an insurance policy with less coverage). There is some evidence that losses are perceived differently than gains, depending on how problems are framed. One reason discussions like this can go in circles is that problems like "do you prefer the $10 or the 55% chance of $20" are too abstracted from actual decisions.

  8. I think there may be a problem here with risk aversion being confounded with time preference. I can easily imagine refusing a gamble with 50% chance of winning $11 and 50% chance of losing $10 for a wide range of values of my total wealth. The reason is simply that if I lose the $10, I may not have enough cash to pay for lunch, and will have to take a detour to the bank, which costs me a lot more than $1 in time lost. Similarly, when people contemplate a gamble with a potential loss of $1000, they may be averse to it because they have arranged their current affairs in such a way that to absorb the loss they'd have to cash in a GIC or make some other costly adjustment in order to pay this month's rent. If you looked at gambles that will be settled in a year's time, the results might be different, since the year's lag would give plenty of time to adjust to optimally accommodate the loss. (Of course, you'd then have the cost of keeping the delayed bet in mind for a year, which is probably also greater than $1.)

    Radford Neal

  9. Radford,

    You're giving a reasonable rationale for loss aversion. Concerns like yours are why I defined the problem in terms of $0,$10,$20; $10,$20,$30; $20,$30,$40; etc.–thus, all in terms of gains. It may be that people mentally subtract to place all these problems at zero, but if so, that's a problem-solving strategy they're using, not a statement about inherent preferences or utilities of $40,$50,$60, or whatever.
