“It’s as if you went into a bathroom in a bar and saw a guy pissing on his shoes, and instead of thinking he has some problem with his aim, you suppose he has a positive utility for getting his shoes wet”

The notion of a geocentric universe has come under criticism from Copernican astronomy. . . .

A couple months ago in a discussion of differences between econometrics and statistics, I alluded to the well-known fact that everyday uncertainty aversion can’t be explained by a declining marginal utility of money.

What really bothers me—it’s been bothering me for decades now—is that this is a simple fact that “everybody knows” (indeed, in comments some people asked why I was making such a big deal about this triviality), but, even so, it remains standard practice within economics to use this declining-marginal-utility explanation.

I don’t have any econ textbooks handy but here’s something from the Wikipedia entry for risk aversion:

Risk aversion is the reluctance of a person to accept a bargain with an uncertain payoff rather than another bargain with a more certain, but possibly lower, expected payoff.

OK so far. And now for their example:

A person is given the choice between two scenarios, one with a guaranteed payoff and one without. In the guaranteed scenario, the person receives $50. In the uncertain scenario, a coin is flipped to decide whether the person receives $100 or nothing. The expected payoff for both scenarios is $50, meaning that an individual who was insensitive to risk would not care whether they took the guaranteed payment or the gamble. However, individuals may have different risk attitudes. A person is said to be:

risk-averse (or risk-avoiding) – if he or she would accept a certain payment (certainty equivalent) of less than $50 (for example, $40), rather than taking the gamble and possibly receiving nothing. . . .

They follow up by defining risk aversion in terms of the utility of money:

The expected utility of the above bet (with a 50% chance of receiving 100 and a 50% chance of receiving 0) is,
E(u)=(u(0)+u(100))/2,
and if the person has the utility function with u(0)=0, u(40)=5, and u(100)=10 then the expected utility of the bet equals 5, which is the same as the known utility of the amount 40. Hence the certainty equivalent is 40.

But this is just wrong. It’s not mathematically wrong but it’s wrong in any practical sense, in that a utility function that curves this way between 0 and 100 can’t possibly make any real-world sense.

Way down on the page there’s one paragraph saying that this model has “come under criticism from behavioral economics.”

But this completely misses the point!

It would be as if you went to the Wikipedia entry on planetary orbits and saw a long and involved discussion of the Ptolemaic model, with much discussion of the modern theory of epicycles (image above from Wikipedia, taken from the Astronomy article in the first edition of the Encyclopaedia Britannica), and then, way down on the page, a paragraph saying something like,

The notion of a geocentric universe has come under criticism from Copernican astronomy.

Again, this is frustrating because it’s so simple, it’s so obvious that any utility function that curves so much between 0 and 100 can’t keep going forward in any reasonable sense.
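Since the claim is quantitative, it can be checked directly. Here is a rough Python sketch (my own illustration, not from the post). It calibrates a standard smooth concave utility, CARA (u(x) = 1 - exp(-a*x)), an arbitrary but conventional choice since the numbers in the Wikipedia example don't pin down a functional form, so that the certainty equivalent of the $0-or-$100 coin flip is $40, and then asks what that same curve says about larger stakes:

```python
import math

def cara_u(x, a):
    """CARA (exponential) utility u(x) = 1 - exp(-a*x)."""
    return 1.0 - math.exp(-a * x)

def calibrate_a():
    """Bisect for the risk-aversion parameter a at which the certainty
    equivalent of a 50/50 $0-or-$100 coin flip is exactly $40."""
    def gap(a):
        # u(40) minus the expected utility of the coin flip
        return cara_u(40, a) - 0.5 * (cara_u(0, a) + cara_u(100, a))
    lo, hi = 1e-9, 1.0  # gap(lo) < 0 < gap(hi)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if gap(mid) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

a = calibrate_a()  # about 0.0082

# The same utility function, asked about a 50/50 shot at $1,000,000:
eu = 0.5 * (cara_u(0, a) + cara_u(1_000_000, a))
ce = -math.log(1.0 - eu) / a
# ce comes out to roughly $84: this "person" prefers a sure $85
# to a coin flip at a million dollars.
```

Any concave family bent sharply enough to explain the small-stakes choice gives the same flavor of absurdity: near-total indifference to large gains.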

It’s an example I used to give as a class-participation activity in my undergraduate decision analysis class and which I wrote up a few years later in an article on classroom demonstrations.

I’m not claiming any special originality for this result. As I wrote in my recent post,

The general principle has been well-known forever, I’m sure.

Indeed, unbeknownst to me, Matt Rabin published a paper a couple years later with a more formal treatment of the same topic, and I don’t recall ever talking with him about the problem (nor was it covered in Mr. Cutlip’s economics class in 11th grade), so I assume he figured it out on his own. (It would be hard for me to imagine someone thinking hard about curving utility functions and not realizing they can’t explain everyday risk aversion.)

In response, commenter Megan agreed with me on the substance but wrote:

I am sure it has NOT been well-known forever. It’s only been known for 26 years and no one really understands it yet.

I’m pretty sure the Swedish philosopher who proved the mathematical phenomenon 10 years before you and 12 years before Matt Rabin was the first to identify it. The Hansson (1988)/Gelman (1998)/Rabin (2000) paradox is up there with Ellsberg (1961), Samuelson (1963) and Allais (1953).

Not so obvious after all?

Megan’s comment got me thinking: maybe this problem with using a nonlinear utility function for money is not so inherently obvious. Sure, it was obvious to me in 1992 or so when I was teaching decision analysis, but I was a product of my time. Had I taught the course in 1983, maybe the idea wouldn’t have come to me at all.

Let me retrace my thoughts, as best as I can now recall them. What I’d really like is a copy of my lecture notes from 1992 or 1994 or whenever it was that I first used the example, to see how it came up. But I can’t locate these notes right now. As I recall, I taught the first part of my decision analysis class using standard utility theory, first having students solve basic expected-monetary-value optimization problems and then going through the derivation of the utility function given the utility axioms. Then I talked about violations of the axioms and went on from there.

It was a fun course and I taught it several times, at Berkeley and at Columbia. Actually, the first time I taught the subject it was something of an accident. Berkeley had an undergraduate course on Bayesian statistics that David Blackwell had formerly taught. He had retired, so they asked me to teach it. But I wasn’t comfortable teaching Bayesian statistics at the undergraduate level—this was before Stan, and it seemed to me it would take the students all semester just to get up to speed on the math, with no time to do anything interesting—so I decided to teach decision analysis instead, using the same course number. One particular year I remember—I think it was 1994—we had a really fun group of undergrad stat majors, and a whole bunch of them were in the course. A truly charming bunch of students.

Anyway, when designing the course I read through a bunch of textbooks on decision analysis, and the nonlinear utility function for money always came up as the first step beyond “expected monetary value.” After that came utility of multidimensional assets (the famous example of the value of a washer and a dryer, compared to two washers or two dryers), but the nonlinear utility for money, used sometimes to define risk aversion, came first.

But the authors of many of these books were also aware of the Kahneman, Slovic, and Tversky revolution. There was a ferment, but it still seemed like utility theory was tweakable and that the “heuristics and biases” research merely reflected a difficulty in measuring the relevant subjective probabilities and utilities. It was only a few years later that a book came out with the beautifully on-target title, “The Construction of Preference.”

Anyway, here’s the point. Maybe the problem with utility theory in this context was obvious to Hansson, and to me, and to Yitzhak, because we’d been primed by reading the work by Kahneman, Slovic, Tversky, and others exploring the failures of the utility model in practice. In retrospect, that work too should not have been a surprise—after all, utility theory was at that time already a half-century old and it had been developed in the behavioristic tradition of psychology, predating the cognitive revolution of the 1950s.

I can’t really say, but it does seem that sometimes the time is ripe for an idea, and maybe this particular idea only seemed so trivial to me because it was already accepted that utility theory had problems modeling preferences. Once you accept the empirical problem, it’s not so hard to imagine there’s a theoretical problem too.

And, make no mistake about it, the problem is both empirical and theoretical. You don’t need any experimental data at all to see the problem here:

[Two screenshots from the original post appeared here.]

Also, let me emphasize that the solution to the problem is not to say that people’s preferences are correct and so the utility model is wrong. Rather, in this example I find utility theory to be useful in demonstrating why the sort of everyday risk aversion exhibited by typical students (and survey respondents) does not make financial sense. Utility theory is an excellent normative model here.

Which is why it seems particularly silly to be defining these preferences in terms of a nonlinear utility curve that could never be.

It’s as if you went into a bathroom in a bar and saw a guy pissing on his shoes, and instead of thinking he has some problem with his aim, you suppose he has a positive utility for getting his shoes wet.

60 Comments

  1. Anonymous says:

    It’s been well known for about half a century in finance/gambling circles that, if you reinvest your money, linear utility leads to gambler’s ruin. To avoid that fate, it turns out a log utility in the face of uncertainty does the trick. Log utility does curve substantially in a risk-averse sense. It’s called the Kelly criterion:

    http://www.amazon.com/Kelly-Capital-Growth-Investment-Criterion/dp/9814383139/ref=sr_1_1?ie=UTF8&qid=1410355972&sr=8-1&keywords=kelly+criterion

    The Kelly criterion (or practical modifications of it) is heavily used, not because that’s the utility the investor personally has, but because they know it largely avoids the gambler’s ruin problem while leading to large increases in wealth if you have an edge.

    It’s not impossible that humans have partially internalized this as a heuristic for life lived under uncertainty in some way.

    see here also: http://en.wikipedia.org/wiki/Kelly_criterion
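For readers who haven't seen it, here is a minimal Python sketch of the Kelly idea (illustrative numbers, not anyone's actual betting system): choose the fraction of bankroll that maximizes the expected log of wealth growth per bet.

```python
import math

def expected_log_growth(f, p, b=1.0):
    """Expected log of wealth growth per bet when staking fraction f of
    bankroll, winning with probability p at net odds b (even money: b=1)."""
    return p * math.log(1 + f * b) + (1 - p) * math.log(1 - f)

def kelly_fraction(p, b=1.0):
    """Closed-form maximizer of expected log growth: f* = (p*b - (1-p)) / b."""
    return (p * b - (1 - p)) / b

p = 0.6                     # a hypothetical 60/40 even-money edge
f_star = kelly_fraction(p)  # 0.2: stake 20% of bankroll per bet

# f* beats nearby fractions, as the log-growth criterion demands:
assert expected_log_growth(f_star, p) > expected_log_growth(0.15, p)
assert expected_log_growth(f_star, p) > expected_log_growth(0.25, p)
```

Note that with no edge (p = 0.5 at even money) the formula says to bet nothing, which is the gambler's-ruin-avoidance property the commenter describes.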

    • Andrew says:

      Anon:

      I know about the Kelly criterion but it does not imply that utility should depart in any noticeable way from linearity in the range [current assets -$100, current assets + $100], for example. The Kelly criterion says that the proportion of your bankroll you should bet should depend in a particular way on your probability of winning, but this all comes from the idea that, when you hit absolute zero, you’re all tapped out. It should not be relevant in any meaningful sense if the question is a choice between, say, $40, and a lottery of $30 or $50.

      That said, you raise an interesting point that it could be an adaptive heuristic to think about gains and losses from a Kelly criterion perspective. One could argue that in so-called caveman times it was not generally possible to accumulate assets, and so all sorts of uncertain events had the possibility of taking a person all the way to zero, hence it could be adaptive to think roughly logarithmically about possessions.

      I don’t think this invalidates the point of my post. After all, the idea of a geocentric universe makes a lot of intuitive sense; it just happens to be wrong. Similarly, the idea of log utility for small changes in dollar values might fit well with how our brain works but I can’t see it being a good normative model for the modern world.

      • Anonymous says:

        How did we go from:
        “Utility theory is an excellent normative model here” and lots of people risking billions at this very moment use log utility for very good and well understood mathematical reasons,

        to:
        it’s not a “good normative model for the modern world”?

        How did we go from:
        “well-known fact that everyday uncertainty aversion can’t be explained by a declining marginal utility of money.”

        to:
        “the idea of log utility for small changes in dollar values might fit well with how our brain works”?

        The problem may state only a single bet, but people don’t make single bets. They make an indefinite series of bets in the face of uncertainty. That’s what life is. So avoiding gambler’s-ruin-type issues is inherent in a person’s decision even if the problem doesn’t make it explicit.

        I think this happens a lot with psychological paradoxes. Some psychologist finds an example where people seem to make irrational or bad decisions, when what’s really happening is that the person is making a decent decision under repeated games, because that’s the scenario the participant wants to consider, or is heuristically drawn to, regardless of how the problem choice is presented.

        • Andrew says:

          Anon:

          I think utility theory is an excellent normative model if the utility makes sense. I don’t think it’s an excellent normative model if you accept simple decisions and survey responses as representing true utilities.

          I’m not saying that people are making bad decisions in the $30/$40/$50 problem. Indeed, this is typically presented as a survey question rather than a real decision. I think the bad decision is coming from an analyst who takes an observed pattern of uncertainty aversion and from there draws the conclusion that this particular behavior can be explained by a utility analysis with a sharply curving nonlinear utility function for money.

        • Anonymous: I am not that familiar with the Kelly criterion etc, but let’s assume we can use utility U=log(x) for x = assets/typical_assets_of_an_average_person, in order to play repeated games and get “good” mathematical results.

          I know it’s kind of sad to hear how little in the way of liquid assets people actually have in the US (there was a poll a while back suggesting that a large fraction of US residents couldn’t afford a $2000 car repair without borrowing money). Still, let’s suppose the order of magnitude for the total assets of a person in the US is $50,000, including liquid and illiquid assets (such as equity in a house or condo, bonds, retirement accounts, the sale value of their car, furniture, and so on).

          Then gambles involving $50 mean a change in x on the order of 50/50000 = 0.001

          log(1.001) = log(1) + 0.001 to within a tiny margin of error, so the U function is linear over this range. I think this is Andrew’s point. The utility can’t bend over sharply on this scale, because then it couldn’t explain what happens when people go to buy a car and deal with quantities of money in the $10k range, or buy a house or condo and deal with $100k orders of magnitude.

          I think the issue is that even if people are using log(x) as a utility, they’re redefining the denominator in the dimensionless ratio x in a normatively inappropriate (but still psychologically natural) way.

          So when they’re faced with a certain $100 or 50% chance of $200 and 50% chance of $0 they redefine x to mean Assets_involved_in_the_game / $200 or something like that.

          This seems quite natural to me actually, people are pretty well known to have difficulty with “big” numbers, so to understand a problem they are likely to pick a scale factor that makes the comparisons of reasonable size, a natural scale factor in problems like that is something like “the maximum I might wind up with in the end”. As soon as Andrew moves from a few tens of dollars to a billion dollars, the approximation that the denominator is more or less constant is violated pretty obviously in people’s minds, but they don’t necessarily know what the real issue is.
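A small Python sketch of this frame-dependence (my illustration, with hypothetical wealth figures): the certainty equivalent of the $0-or-$100 flip under log utility depends entirely on the base wealth the bet is framed against.

```python
import math

def ce_log(W, gamble):
    """Certainty equivalent of a gamble for a log-utility agent who frames
    outcomes against a base wealth W: solves log(W + ce) = E[log(W + x)]."""
    eu = sum(p * math.log(W + x) for p, x in gamble)
    return math.exp(eu) - W

flip = [(0.5, 0.0), (0.5, 100.0)]  # the $0-or-$100 coin flip

ce_log(50_000, flip)  # ~49.98: against total assets, essentially risk-neutral
ce_log(200, flip)     # ~44.95: against a narrow "stakes of the game" frame
ce_log(80, flip)      # ~40.0: log utility matches the textbook CE of $40
                      # only if the framed wealth is a mere $80
```

So the sharply risk-averse answer (CE = $40) is consistent with log utility only under an implausibly narrow frame, which is the scale-factor point in another guise.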

          • Anonymous says:

            Daniel, blackjack card counters use log utility to size bets and they’re typically around the $5-$100 range.

            They also in practice use fractional Kelly to adjust for imperfections in the probability model.

          • Ian says:

            I’m not sure I understand what the normalization by typical assets is doing here. Isn’t that just adding a constant value to the utility function, which shouldn’t have any effect at all?

            • Ian says:

              Oops, sorry, somehow didn’t read the comment very thoroughly…

              • To explain for others who might be in a similar position: if the normalization scale is CONSTANT, then it doesn’t do anything interesting, but if the choice of normalization scale is a function of the bet, then it changes the utility that you’re using from bet to bet. Hence if you expect there to be a “single utility” which consistently explains “risk aversion,” you won’t see that; you’ll see paradoxes of the type Andrew mentions, where the “utility” seems to bend over sharply when asked about $100 bets, but couldn’t possibly bend that sharply and still be consistent with, say, $10,000 bets.

        • Martha says:

          Anonymous said: “I think this happens a lot with Psychological paradoxes.”

          It used to bother me when people talked about paradoxes — such discussions typically seemed to be making a mountain out of a molehill (saying “How can this happen?” when to me it clearly could and did.) Then someone pointed out to me that “paradox” literally means, “counter to orthodoxy” (or sometimes described as “counter to received opinion” or “contrary opinion”). Since then, when someone talks about a “paradox,” I typically think something like, “OK, they’re having trouble because the world isn’t as simple as they thought it was; they need to work it through.”

          • Richard McElreath says:

            Yes. I minored in Ancient Greek, and when translating we were instructed to gloss “paradox” as “strange”.

            I majored in anthropology. I had a fantastic instructor who told us something like: Most academic fields are orthodox. Some try to be heterodox, disagreeing with convention. In anthropology, we aim to be paradox.

            • Daniel Gotthardt says:

              And if you major in Sociology, you learn that some people (TM) use the term “paradox” (in the sense of Epimenides paradox – the story with the Cretan …) as an excuse to ignore contradictions in their own ideas. ;-)

      • Anonymous says:

        At the very least there should be a separation between the idea of liking things, which is what “utility” is often presented as in practice, and actual decision making. The two can be very different.

        An even more egregious example, which fits perfectly with the title of the post, is when economists talk about how we have high unemployment because people have a positive utility for more free time.

  2. Rahul says:

    Andrew:

    Did you ever try editing that Wikipedia article to give a better perspective?

  3. Radford Neal says:

    Certainly the naive explanation of risk aversion for these small bets in terms of non-linear utility of money is wrong. But I think the solution is just to stop being naive, not to conclude that decision theory is incorrect as a normative prescription, or to conclude that people are generally stupid.

    Consider the following issues:

    1) In real small bets, people may not be as concerned with their total financial assets as they are with whether they have enough money in their pocket for bus fare home.

    2) In real small (or large) bets, people are not certain that the other person will pay up if they lose. Larger uncertainty may be associated with larger chance of the other person failing to pay (eg, they perhaps can more easily get away with claiming that the bet wasn’t really what you think it is). People may apply this as a heuristic even in situations where closer examination would show that possible failure to pay isn’t really an issue. For small bets, it may not be worth it to think deeply about the situation.

    3) Bets are social interactions. When the bet is small, the utility of the money lost or gained may be small compared to the difference in utility from different social impressions produced by taking or not taking the bet, and if it is taken, winning it or losing it.

    4) Lots of people have heard of risk aversion by now. When asked about how they would bet (but not given an opportunity to actually do so), people may just give what they think is the “right” answer.

    • Andrew says:

      Radford:

      I agree with everything you say. I think it’s helpful to think of these sorts of motivations, and what upsets me is that, even after decades of this being a well-known problem, economists still seem to go first to the default explanation of there being a sharply curving nonlinear utility function for money, even though that makes no sense.

      Perhaps we can cut and paste your four points into that Wikipedia article and see what happens!

      • Andrew, my discussion above and below about scale factors is one way to see how this works. Perhaps the problem is thinking of “one utility for all decisions” as a good Positive model of decision making (we might throw it out in favor of describing decision making in terms of both a “shape” and a “scale,” or in favor of some other more complicated theory), but that doesn’t mean “one utility for all decisions” wouldn’t be a good Normative model.

    • Radford: I agree with you, and above I talk about one way to understand (1) in terms of a scale factor. People don’t necessarily use the same scale factors to evaluate all different situations. In your example, the current pocket money is the scale factor, in other decisions it may be the cash balance in a checking account, in other situations it might be the net assets in one (of several) investment accounts, in other situations it might be the total net worth of the individual (say when making decisions about purchasing a house or buying a franchise business).

      It seems perfectly natural to me that people evaluate different situations according to a different sense of what is “big vs small”. A person who has $1M in total assets including their house, car, investments, cash accounts, retirement accounts, etc can still balk at paying $150 for a fancy dinner and choose instead to get a less fancy dinner for “only” $80 even though the quantities involved are trivial compared to their total assets.

    • Anonymous says:

      A data point in regards to this “small bet” stuff: blackjack card counters use the Kelly criterion (log utility) to size bets which typically range from $5-$100.

      • Yes, but do they use the same scale factor when they evaluate whether to place a $100 bet on a blackjack hand, and a $50,000 bet on a business investment?

        I didn’t say that changing the scale factor was wrong, only that it may explain why people do things that would imply large curvature of their utility under certain circumstances, while in other circumstances they clearly don’t have such a radically curved utility.

        In other words, a utility model where U = log(x/C) and C is a constant dollar amount clearly can’t explain Andrew’s issues, but if you think of people making decisions as constructing a utility on the fly U_y = log(x/y) and choosing y so that all the x/y values they will consider in the problem are O(1) or so, then it makes sense why different orders of magnitude of decisions result in outcomes that can’t be explained by a single utility.

        Since that’s more or less what we do in all other measurement areas, it seems reasonable to assume we do it for dollars too. Consider:
        Length: feet, inches, meters, centimeters, kilometers, astronomical units…
        Time: year, month, week, hour, minute, second, millisecond, microsecond.
        Mass: pound, oz, kg, gram, ton, metric-tonne.
        Volume: gallon, cup, pint, liter, ml.

        Also Benford’s law is more or less a consequence of our natural tendency to choose units that describe a problem’s general order of magnitude.

    • Fernando says:

      Radford:

      Exactly. That is also why I don’t believe most of the behavioural experiments. Way too many implicit assumptions to drive the interpretation. The field has empirical feet of clay.

  4. jonathan says:

    I had a hard time grasping simple utility functions because I can’t and would never believe they could be linear. I’m sure we approach this from different directions, but I see the actual monetary or other values assigned in some way by a utility function as symbolic representations of other outcomes, as outputs of these much harder to describe functions. Money, for example, represents things or it isn’t money. It can’t be stripped of its representative meaning. So that $40 may represent dinner, maybe these days 1/2 the cost of going somewhere fairly nice with someone else, or a bottle of wine or genever. The various functions in my life are likely different from those in yours, but I think we’d agree a) we can’t separate ourselves from them and b) we know in general these functions pop in or out as the symbolic stake changes. (I would imagine there must be exceptions, that some people, perhaps a form of psychopath, have a near constant reaction as symbolic stakes change.) If you ask a student to think about the meaning of $20 then $25, then $45 or some other figures, you can’t restrain their minds from associating those values. Same is true if you go by tiny increments: the underlying functions that generate the response shift. The idea these functions could generate a line with predictable, smooth direction has always to me seemed incomprehensible. (And to editorialize, I hate when people speak of this kind of thing as “quantum”, as in it moves in jumps so it’s quantum. Bull.)

    A guy I knew who ran a discount store said, “This stuff won’t sell if I charge too little.” He’d pick up storage bins he could sell at a profit for $2 and found he moved more at $9. There’s money and there’s what’s going in your head. In this case, you could literally have bought the same thing at $2 as at $9 but something in the customer’s head said no.

    In law school, we were presented with a series of cases in which an infant was killed. We had to value which the law should treat the most leniently, which the harshest. We got almost all of them backwards because we were persuaded by presentation of sympathetic circumstances. That was the point of law school in a nutshell: learn how to apply these rules, to determine which points are elements of offense, which are mitigating factors, etc. so you can generate reasonably predictable answers. (The reason for this paragraph is that last line: reasonably predictable answers, given the multitude of human factors involved.)

    I guess at heart I go back to childhood: if utility functions were in fact so predictably linear, why did they have to teach us to share? Why is the value of tomorrow so difficult for so many kids to grasp? Do your homework, do well in school, do better in life … each step lays out but the world is full of proof that this is just what we do. That is, to me, an implication of these simplistic utility models is a form of rationality that present values much better than we actually can, that orders complex variables – way beyond well-ordering – and so on. And that’s assuming the same relative degree of perception, something I realize every time I open a can because my animals run in the hope it’s something for them. They’re highly attuned to a relatively narrow band that affects them and so are we, just more so.

    BTW, I often find your posts talk about things I’m interested in but I have trouble understanding sometimes exactly what you’re getting at because your mental approach is not mine. I just try to provide a perspective.

  5. Radford Neal says:

    Daniel: “A person who has $1M in total assets including their house, car, investments, cash accounts, retirement accounts, etc can still balk at paying $150 for a fancy dinner and choose instead to get a less fancy dinner for “only” $80 even though the quantities involved are trivial compared to their total assets.”

    I don’t think there’s any puzzle about this. If they have 100 fancy dinners a year for the next 10 years, the difference of $70 per dinner comes to $70,000, which is not trivial compared to their $1,000,000 in assets. Billionaires probably aren’t reluctant to go for a fancy dinner every day, and if they are (even though they like such dinners), it’s likely because they don’t want to present that sort of social image, not because they’re using some non-standard version of utility theory.

    More generally, I’m not sure different “scale factors” explain anything. WHY would people use different scale factors? And different scale factors are not needed to explain the bus fare example. It’s pretty simple – lose the bet, and you have to walk home rather than take the bus.

    • Perhaps the dinner example isn’t perfect, in part because we’re going to repeatedly have dinners and each time we’re going to pay out, we never get a “pay in”.

      I still think scale factors are of interest. Mathematically we can’t take the logarithm or other non-power function of a dollar amount, due to dimensional consistency requirements, so we need to pick a scale factor. Although normatively we maybe should always pick a single scale factor, which would make the scale factor fairly trivial in the end, there’s no reason to think that people always *do* pick a single scale factor, and there’s good reason, based on everyday experience, to think that they don’t.

      I certainly don’t pick the same scale factor when measuring the amount of time it takes to commute to work and the amount of time it takes to blink my eye (hours vs. milliseconds, for example). Why should we think that people naturally choose scale factors when measuring everything else, but not dollars?

  6. Jonathan (another one) says:

    Two things: first, I was always taught that for small bets, people are risk-loving, not risk-averse. The multi-billion casino industry owes its existence to the fact that people are, for small bets, willing to pay to assume risk. Those same people, of course, are risk averse as the stakes get sufficiently high. These concepts are not without confusion as to how exactly to formalize them, but people simply aren’t risk averse at small stakes, unless they’re confused. Second, Rabin’s conclusion is not that utility functions curve enough to make the observed choices trace out expected utility, but that when people exhibit small stakes risk aversion, they’re just making a mistake… They are confused. See particularly Benartzi and Thaler (1999) on this point. http://independent401kadvisors.com/library_articles/RiskAversionorMyopia.pdf “In study 2, we constructed three repeated gambles that are virtually identical from a normative perspective but that differ in terms of the magnitude of single trial losses. Consistent with myopic loss aversion, subjects found the gambles more attractive as the single trial loss was reduced. We also showed that this aversion to short-term losses can be overcome by providing the subjects with the explicit distribution of potential outcomes.”
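The repeated-gamble point behind myopic loss aversion can be made concrete with Samuelson's classic bet (win $200 or lose $100 on a fair coin; these numbers are his famous illustration, not figures from the Benartzi and Thaler paper). A quick Python sketch of how the chance of a net loss shrinks with repetition:

```python
from math import comb

def p_net_loss(n, win=200, lose=100):
    """P(total payoff < 0) after n independent 50/50 plays of a bet
    paying +win or -lose. Net < 0 iff #wins < n*lose/(win+lose)."""
    cutoff = n * lose / (win + lose)
    return sum(comb(n, w) for w in range(n + 1) if w < cutoff) / 2 ** n

p_net_loss(1)    # 0.5: a single play comes out behind half the time
p_net_loss(10)   # ~0.172: ten plays rarely end in a net loss
p_net_loss(100)  # well under 0.001
```

Showing subjects this explicit distribution of aggregate outcomes is exactly the intervention Benartzi and Thaler report as overcoming the aversion to short-term losses.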

  7. James Cunningham says:

    Consider, if it helps, that I am a sort of well-meaning imbecile who wishes to correct his misconceptions. Could someone take sufficient pity on me to explain why a logarithmic-shaped utility function for money makes no obvious sense whatsoever? I understand how it leads to paradoxes with respect to the definition of risk aversion — what I don’t get is why it should seem senseless to me full stop.

    We don’t have to be talking about money. We could use strawberries, say. I want more strawberries but given *enough* of them, each further strawberry will offer me less satisfaction than the last. Eventually I suppose my utility function will begin to slope downwards as I drown in strawberries, but that doesn’t seem like a huge problem; we can restrict our attention to the upward-sloping portion. It seems like a logarithmically-curved utility function would describe my attitude toward strawberry consumption rather well.

    Why is it different with money? Or is it?

    • Jonathan (another one) says:

      There’s nothing wrong with logarithmic utility for anything, including money. The problem comes when the choice criterion is the maximization of expected utility under uncertainty. Even with logarithmic utility, you can be vulnerable to a ramped-up version of the St. Petersburg paradox.

      There’s nothing wrong with linear utility for money, either, but the same expected utility criterion leaves you vulnerable to the St. Petersburg paradox.
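A small numerical sketch of this point may help. The classic gamble and its “ramped-up” cousin can be compared via partial sums of expected utility; the payoff schedule 2^(2^n) for the ramped-up version is my own illustrative choice, not something specified in the comment:

```python
import math

# St. Petersburg gamble: a fair coin is tossed until the first head; if that
# happens on toss n (probability 2**-n), the payoff is 2**n dollars.
def expected_utility(u, n_terms):
    """Partial sum of expected utility over the first n_terms outcomes."""
    return sum((0.5 ** n) * u(2 ** n) for n in range(1, n_terms + 1))

# Linear utility: every term equals exactly 1, so the sums grow without bound.
eu_linear = expected_utility(lambda x: x, 50)   # 50.0 after 50 terms

# Log utility tames the classic gamble: the series converges to 2*ln(2).
eu_log = expected_utility(math.log, 50)

# "Ramped-up" version with payoff 2**(2**n): each log-utility term is
# (2**-n) * ln(2**(2**n)) = ln(2), so the series diverges all over again.
eu_log_ramped = sum((0.5 ** n) * (2 ** n) * math.log(2) for n in range(1, 51))
```

The divergent term is computed analytically in the last line, since the integer 2**(2**50) is far too large to construct directly.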

      • Anonymous says:

        But that doesn’t seem to be what Dr. Gelman is saying, exactly. He writes here:

        > Again, this is frustrating because it’s so simple, it’s so obvious that any utility function that curves so much between 0 and 100 can’t keep going forward in any reasonable sense.

        And in his earlier post he wrote:

        > indeed the very concept of a utility function for money becomes, like a rainbow, impossible to see if you try to get too close to it

        Thus my confusion. He seems to be saying that there’s something about logarithmic utility — or perhaps logarithmic utility specifically with respect to money — that’s wrong, even outside of the definition of risk aversion.

        • James Cunningham says:

          That was me. When I wrote that I was an imbecile I meant it: I can’t quite seem to grasp that the name field doesn’t repopulate automatically here …

        • Jonathan (another one) says:

          But the only reason it can’t keep “going forward in any reasonable sense” is if you’re convinced that the maximization of expected utility is a criterion for choice. His counterexample shows (only, IMO) the absurdity of that choice metric, not the absurdity of logarithmic (or any other kind of) utility. It’s certainly possible that I’m only 10% happier with e^11 dollars than I would be with e^10. Indeed, I doubt I’d be even 1 percent happier. But if true, then mere $500 changes in my wealth, up or down, can’t have any risk aversion attached to them at all. But that’s one answer: there is no real risk aversion for small sums. In fact, I think people are risk-loving (on average) for small sums. This means that the utility-of-money curve may not even be concave in some regions and/or that people maximize something other than expected utility.

      • Corey says:

        Under the Savage axioms for subjective decision theory, utility is bounded. I’m given to understand the proof is somewhere in here.

    • Andrew says:

      James:

      As I explain in the linked paper, the logarithmic utility function for money does not make sense in the example given, because a person with that utility function would prefer a fixed gift of $40 to a 50/50 chance of winning a billion dollars. That does not make sense.

      More generally, any mathematical model has its limitations. The point is that this particular model (not just logarithmic, but any utility-for-money model that purports to explain the sorts of everyday uncertainty aversion given in that example) cannot work. This is the point of my paper and of Yitzhak’s.

      • Jonathan (another one) says:

        By the way, Andrew, all of this is in Friedman and Savage (1948) so it really isn’t new. http://stevereads.com/papers_to_read/utility_analysis_of_choices_involving_risk.pdf “It does not, of course, follow that there will exist a utility function that will rationalize in this way [my note: meaning by expected utility maximization] the reactions of individuals to risk.”

        • bxg says:

          I think Friedman and Savage are trying to make the opposite point, and are wrong. After your quoted sentence, they concede that any individual might be inconsistent, but the whole paragraph leads to the point: “Whether a numerical utility function will serve to rationalize any particular class of reactions to risk is an empirical question to be tested; there is no obvious contradiction such as was once thought to exist”. But that doesn’t seem to be true in general. There _is_ a “particular class of risks”, namely risk-averse decisions against gambles that are small relative to total wealth, where we can pretty much rule out [*] a utility explanation a priori.

          [*] Of course there are escape hatches, such as disbelieving that the decision maker views the actual odds as being those described.

      • Dave says:

        “…the logarithmic utility function for money does not make sense in the example given, because a person with that utility function would prefer a fixed gift of $40 to a 50/50 chance of winning a billion dollars. That does not make sense.”

        That’s only the case if the person has $0 to start. The logarithmic utility function is for total savings/wealth/bankroll. Even if the person starts with $0.01, then the person prefers the 50/50 chance of $1B. If the person truly has $0, then the assumption is that the person in question would die of starvation in the immediate future without getting some money. So yes, in that case a sure thing of $40 would be more rational than a 50% chance of immediate death.
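This arithmetic is easy to check. A quick sketch under the assumption of log utility over total wealth (the function name and dollar figures are just for illustration):

```python
import math

def prefers_gamble(wealth):
    """With log utility over total wealth, compare a sure $40 gift
    against a 50/50 shot at $1 billion."""
    sure = math.log(wealth + 40)
    gamble = 0.5 * math.log(wealth) + 0.5 * math.log(wealth + 1e9)
    return gamble > sure

# Even a penny of starting wealth flips the preference to the gamble:
print(prefers_gamble(0.01))   # True
print(prefers_gamble(10000))  # True
```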

        • Andrew says:

          Dave:

          Indeed, but in the example given, it’s $30/$40/$50 compared to your current state. Not relative to absolute zero.

          For that matter, you wouldn’t die of starvation if you have $0; you could ask your friends for food or go to a soup kitchen, etc. But in any case my point is that the decision is relative to your current assets; it’s always described that way.

  8. Anonymous says:

    The Kelly criterion is about more than just gambler’s ruin. It’s important because one should consider not just the expected value but also the typical outcome.

    For example, suppose you are offered a game where you bet 50% of your net worth, with a 60% chance to double the stake and a 40% chance to lose it. This is always a positive-EV bet, but after playing it 10 times, the most likely outcome is x*(1.5)^6*(0.5)^4 ≈ 0.71x.

    Most of the time, you end up losing money.

    Suppose x = 100. A rational person could refuse this game, but this does not imply that u(100) > .6*u(150) + .4*u(50)

    Thus I’m not sure that it is valid to write equations such as “U($x) = .55U($(x+10)) + .45U($(x-10))”
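The arithmetic in this comment can be checked exactly with the binomial distribution (numbers as in the example: 60% chance of a 1.5x wealth multiplier, 40% chance of 0.5x, ten rounds); a sketch:

```python
from math import comb

p_win, up, down, rounds = 0.6, 1.5, 0.5, 10

# Exact distribution of the wealth multiplier after `rounds` plays:
# k wins out of `rounds` gives multiplier up**k * down**(rounds - k).
outcomes = [(comb(rounds, k) * p_win**k * (1 - p_win)**(rounds - k),
             up**k * down**(rounds - k))
            for k in range(rounds + 1)]

expected = sum(prob * mult for prob, mult in outcomes)   # 1.1**10 ≈ 2.59
prob_losing = sum(prob for prob, mult in outcomes if mult < 1.0)   # ≈ 0.62
most_likely = up**6 * down**4   # the 6-win path: ≈ 0.71
```

So expected wealth grows to about 2.59x, yet roughly 62% of the time you end up with less than you started with, and the single most likely outcome is about 0.71x.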

  9. Brad Stiritz says:

    Andrew, your humorous title puzzled me a bit. Any number of guys pissing on their shoes may well have made that choice via a “true” sense of positive utility, e.g. drunken foolishness:

    https://www.youtube.com/watch?v=icTZl2BbabY

    You may counter that it’s not appropriate to compare sober vs. drunken utilities. But “sober” behavior isn’t always distinguishable from drunken behavior:

    http://www.distraction.gov/download/research-pdf/comparison-of-cellphone-driver-drunk-driver.pdf

  10. Anonymous says:

    Risk aversion is a property of preferences; it can be defined without any reference to utility models. Risk aversion (over monetary gambles) is defined as: for any possible gamble, an individual prefers having the expected value of that gamble rather than having the gamble itself. No utility function needed.

    • Phil says:

      I think you missed the point. In fact, I know you did.

      Yes, you can define risk aversion without reference to any utility models. But when explaining why people are risk-averse, many economists say that it’s because they have a nonlinear utility function for money. As Andrew is pointing out, that explanation is insufficient. People may indeed have a nonlinear utility function for money, but that does not explain why people make choices the way they do.

      The problem isn’t _defining_ risk aversion. It’s _explaining_ risk aversion (or, rather, uncertainty aversion) in the sorts of cases Andrew is talking about.

      • Andrew says:

        Phil:

        There’s also the meta-point that people have a natural suspicion of my claim that the standard way of doing things in a field (in this case, econ) is wrong, so it’s natural for people, when they see my criticism, to suspect that I am the one who is making the mistake. After all, this is a pretty basic issue. How could Wikipedia and all these textbooks get things wrong? But they do, that’s the interesting thing, indeed that’s why it’s worth posting. I don’t waste my time posting all the zillion things that the textbooks get right!

      • Rahul says:

        And what if one asks “Why do people have a nonlinear utility function for money?”

        i.e. Are utility functions a more basic building block than risk aversion so that we may use one to “explain” the other?
        IOW, if we can take non-linear utility functions as axiomatic or empirical why cannot we do similarly for risk aversion?

      • Anonymous says:

        Nothing I wrote implied any disagreement with Andrew. As the definition should illustrate, assuming that someone is risk-averse is ridiculous. Most likely, humans are neither risk-averse nor risk-loving. Andrew points out that under a particular utility model, risk aversion is equivalent to nonlinear (concave) utility for money. That particular model is even more ridiculous than the definition of risk aversion itself. It is ridiculous even if everyone’s utility is linear in money (or has whatever other functional form might make people happy). Among other things, it requires that all gambles have objective probabilities that everyone would agree upon. If we are talking about _explaining_ anything, that assumption clearly makes no sense.

        • bxg says:

          How is the definition of risk aversion ridiculous? It’s just naming a phenomenon. Maybe it’s common, maybe it’s not, maybe people show it in some cases, maybe they don’t. But if you see the phenomenon, here’s a name for what you saw. I can see that it might be uninteresting and unworthy of a name, but how can this be ridiculous?

          Utility is a _model_ and thus is in an entirely different category of concept.

          • Anonymous says:

            The definition says you should see a preference for the expected value over the gamble itself for _all_ gambles. What I said is that assuming someone behaves like that (for all gambles) is unrealistic. If one sometimes chooses the expected value and other times chooses the gamble, then one is not risk-averse.

  11. Observer says:

    Nice. A thoughtful and thought-provoking post. Thanks.

    I was exposed, moderately thoroughly, to Bayesian decision analysis of the Pratt/Raiffa/Schlaifer variety in the early 1970s. I saw it being used for multi-million dollar investment decisions and it seemed pretty reasonable to me.

    It was clear to me then (I remember thinking it through and discussing it with my wife, who was much more thoroughly exposed to the theory) that my utility for $ had to be within epsilon of linear between zero and a few thousand (we were “starving” graduate students at the time). And, as a consequence, I have not purchased an extended service contract or various “insurance” options with payoffs of less than about $10K, except (1) when the extended service contract also provided significant transaction cost savings or (2) when a Chrysler corporation product was involved. Buying the extended service contract on the Chrysler product may or may not have been a good decision, but it sure had a good outcome—the repairs it covered would have cost five or six times what the contract cost.

    One of the big benefits to me of studying Bayesian decision analysis was to get a clearer understanding of the difference between a good decision and a good outcome.

    One of the costs of that study was using the term “diffuse prior” in a report draft and arguing with my boss, who kept editing it to “uniform prior”. Another cost was biting my tongue when a professor told me, in no uncertain terms, that the probability of an event is the fraction of occurrences of the event in an infinite series of trials. He was more a physicist than an economist. But he was later on my oral qualifying exam committee and called me some years later to offer me a visiting professorship at MIT. So choosing to bite my tongue was probably a good decision.

    Observer

  12. Eric Tassone says:

    Here’s a study I’d like to see (maybe it already exists?): In baseball, young (in terms of service time) players in their pre-free agency years often accept contract offers that seemingly guarantee them financial security but that are for less than what most analysts regard as market value. The typical exchange gives the team cost certainty over any remaining arbitration years, even though the team usually pays more per year than they might have had to had they gone to arbitration, and typically buys out one or more of the player’s free agency years. The player gets the aforementioned financial security and is protected against performance decline, injury, etc.

    There have been enough of these deals that it seems possible to do a study that, roughly, projects future player performance (maybe in terms of WAR), uses some notion of what teams currently and in the future will pay in terms of marginal wins per dollar, and then deduces what sort of discount players typically take, trading off potential future upside if they wait until free agency for the certainty of the contract today. Such studies have been done for individual contracts (see, e.g., http://fivethirtyeight.com/datalab/how-bad-was-the-cabrera-deal/), but I am not aware of such a study being done solely for pre-free agency players in an attempt to quantify what, if any, discount they take by signing such deals. If not precisely the same as everything else on this thread, it seems in the same spirit. (If only there were more than 24 hours in the day….)

  13. Luke Collie says:

    Probably another one missing the main point, but here goes. The point at which the argument starts looking odd to me is not the conclusion, but during the set-up, when it is assumed that the probability p required to make U(x) = (1-p)U(x-10) + pU(x+10) is constant for all x. That is a weird-looking utility function. I would expect p to tend to 0.5 as x increases, because the $10 either way becomes irrelevant as part of a large pay-out. The degree of uncertainty aversion should depend on the ratio of the payments, not their difference, so I’ll take $50 as equal to a 45% chance of $40 + a 55% chance of $60, or $500 against 45% $400 + 55% $600. But if the game is $1000 vs. $990/$1010, we can just toss a coin, so long as I win on heads plus on edge.

    Setting U($x) = 0.45U(0.8x) + 0.55U(1.2x), which is what the above example implies, does not give a simple function, but it is very close to one. If you set p to be log(0.8)/(log(0.8)-log(1.2)), which is 0.55034… rather than 0.55 exactly, it resolves to plain old log utility, U($x) = log($x). So now the utility of the billion-dollar pay-out is 9 (using base-10 logs). Unfortunately, I can’t answer the question as set this way, since log($0) is negative infinity! To get sensible answers with a logarithmic utility, some small offset from zero is required. Say $10, so we need $y such that 0.5log(10) + 0.5log($1 billion) = log(y). So y = $100,000.

    Is the conclusion that the individual here should be indifferent between $100,000 and a coin toss for $10 vs $1billion reasonable? I think it depends on how rich they are to start with, and so what value you put in for the “small” offset on the log utility – the value below which you are implicitly accepting that the utility function is not log, but flattens out. For someone who is starving poor, such that every cent matters, a $0.10 offset might be realistic, giving y = $10,000. These answers are not exactly right, but they don’t seem obviously crazy, either, like the <$40 conclusion did.
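For what it’s worth, the arithmetic in this comment checks out numerically; a quick sketch (natural logs are used for p, where the base cancels, and the certainty equivalents are base-independent as well):

```python
import math

# Indifference probability p solving U(x) = (1-p)U(0.8x) + pU(1.2x)
# exactly under log utility; note it is independent of x.
p = math.log(0.8) / (math.log(0.8) - math.log(1.2))
print(round(p, 5))   # 0.55034

# Certainty equivalent y of a coin flip between $10 and $1 billion
# under log utility: log(y) = 0.5*log(10) + 0.5*log(1e9).
y = math.exp(0.5 * math.log(10) + 0.5 * math.log(1e9))
print(round(y))      # 100000, i.e. $100,000

# With the "starving poor" offset of $0.10 instead:
y_poor = math.exp(0.5 * math.log(0.1) + 0.5 * math.log(1e9))
print(round(y_poor)) # 10000, i.e. $10,000
```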

    Still to be dealt with, though, is why people with sizeable net worth still show uncertainty aversion down to small sums. Why do we perceive the game as $50 vs. $40/$60 when we have a $100,000 house, so that it ‘should’ be $100,050 vs. $100,040/$100,060 and show no noticeable uncertainty aversion? Intuitively, it comes down to compartmentalised budgeting. Playing games is charged against the personal entertainment budget, which for most people makes $10 noticeable. Dividing our resources up this way, so that most of what we own doesn’t exist in our mental model in most situations, makes us avoid risks we ought, rationally, to accept; but it is so useful in other ways that most people, and organisations everywhere, as far back as we have any records, have done it. The response of the student in the set-up-and-punch-line exercise is then due to reframing the situation. Once the potential pay-out gets large, the model shifts from ‘just playing’ to ‘this is serious’. Utility may be approximately logarithmic in both frames, but the offset value / scale factor can move up a long way.

  14. […] I’m back into my habit of linking to Andrew Gelman articles every week – this time a great rant about expected utility titled “It’s as if you went into a bathroom in a bar and saw a guy pissing on his shoes, and instead of th…“ […]

  15. TBW says:

    My problem with these examples is that they all seem to ignore how many times the game is played. Expected value is really irrelevant if you can only play the game once. Let’s say you offer me $10 or one chance in 1,000 of winning $50,000. This is presented as a choice between $10 and an expected value of $50. In reality, it’s essentially a choice between $10 and $0 if you can only play the game once. If I get to play the game 10,000 times, then yes, I would be crazy to take the $10; but if I can play just once, taking the $10 seems pretty logical.
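The one-shot versus repeated-play contrast can be made concrete with a few lines (numbers taken from the example above):

```python
p, prize, sure_thing = 0.001, 50_000, 10

ev = p * prize                         # $50 per play, five times the sure $10
p_zero_one_play = 1 - p                # 0.999: one play almost surely pays nothing
p_any_win_1000 = 1 - (1 - p) ** 1000   # ≈ 0.632: repetition makes the EV bite
```

With a single play you walk away empty-handed 99.9% of the time; only after many plays does the distribution of winnings concentrate near the expected value.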

  16. Tom Passin says:

    Something not getting mentioned here is the potential harm that comes with the various outcomes. In studying the risk of project failures, it is standard to consider that risk is a combination of probability and harm. If an outcome involves no harm, then it carries no risk. So the risk is plausibly the harm multiplied by some power of the probability. Let’s normalize the risk and harm to a maximum of 1 (and a minimum of 0). Then for maximum probability and harm, the risk should be maxed out, i.e., 1.

    For a project I worked on, we used R = H * P^0.5, if I recall. As several of the comments have suggested, “harm” might involve social or psychological factors even if the subject won’t actually lose anything tangible, like money or health.
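As a sketch of that rule of thumb (the exponent 0.5 is just the value recalled above, not a general standard, and the function name is mine):

```python
def risk(harm, prob, alpha=0.5):
    """Risk as harm times a power of probability, both normalized to [0, 1]."""
    if not (0 <= harm <= 1 and 0 <= prob <= 1):
        raise ValueError("harm and prob must be normalized to [0, 1]")
    return harm * prob ** alpha

print(risk(1.0, 1.0))    # 1.0: maximum harm at certainty maxes out the risk
print(risk(0.5, 0.25))   # 0.25: the square root inflates small probabilities
```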

    My point is that considering only probability and the numerical dollar amounts one might win is very far removed from most real-world situations, where all kinds of harm have to be included alongside the supposed benefits, and often winnings, if any, may occur in the future and therefore be uncertain by virtue of that fact alone.

  17. ezra abrams says:

    old joke from the NYC opera house:
    Sorry?!
    You piss on my girlfriend’s shoe and all you can say is sorry!!!
    (this is a gay joke)

  18. ezra abrams says:

    I went through Thinking, Fast and Slow, and tried to read this post, which is a bit above my pay grade.
    One question, perhaps overwhelmingly naive: there are oodles of experiments where the experimentees have to choose between two or more options.
    Does anyone do the control of making sure that people actually understand the math involved?

    • Keith O'Rourke says:

      Ezra:

      When I was a student I was assigned to analyse the data from a study by Jim Till (under Rob Tibshirani’s supervision, through my part-time employment with the statistical consulting service) where utilities were assessed from patients for various disease and social states. My marching orders were to learn about doing linear analysis of utilities that were subject to constraints.

      I was worried about the data more than the modelling and spent a large part of my weekend (unpaid) figuring out how to graph the assessed utilities. The graphs showed the utilities were just not sensible – as my clinical director said afterwards, “That study showed if your patient is socially isolated – break their legs (make them immobile) and they will be much happier!”

      That was the last time I almost learned how to do linear analysis of utilities that were subject to constraints…
