
The fallacy of the excluded rationality

Malcolm Bull writes:

Thanks to the work of behavioural economists there is a lot of experimental evidence to show what many of us would have suspected anyway: that people are not the rational, utility-maximisers of neoclassical economics, but loss-averse sentimentalists who, faced with even the simplest cognitive problem, prefer dodgy short cuts to careful analysis.

No! There’s a fallacy here, and it’s an important one. Bull considers two alternatives:

1. “Rational utility-maximizers”

2. “Loss-averse sentimentalists”

What he’s missing here is an intermediate category, which we might call “Everyday rationalism.” We make rational (dictionary definition: “based on or in accordance with reason or logic”) decisions all the time without breaking out the utilities.

I have no quarrel with utility-based decision analysis—it’s a fine protocol, and we devote an entire chapter of BDA to it—but we should recognize that it’s just one strategy for rational decision making. There are lots of ways to be rational without specifying utility or maximizing anything. For example, you can sit down, list your goals, consider possible ways of attaining them, and then assess potential benefits and drawbacks of each possible action you are considering. Following such a plan does not make you a “loss-averse sentimentalist”!

I’m guessing that, in writing the above, Bull was engaging in some exaggeration, so I’m not saying he holds the view that people must be either “Rational utility-maximizers” or “Loss-averse sentimentalists”—but I do feel that this dichotomy presents problems.

On one hand, we have proponents of formal utility theory who grip that theory like a vise, under the impression that if they let go, there’s nothing but the void of sentiment. On the other, we have opponents of utility theory who are all too ready to abandon rationalism entirely, as here. In the middle is a big gap.

I think both Gigerenzer and Kahneman would agree with me on this one.


    • Jonathan (another me) says:

      Not me. But I went to graduate school back when people took Herb Simon seriously. Throw in satisficing as well. What Andrew is suggesting is much closer to satisficing behavior. There is a serious point here that I have made for a long time — the brilliant mathematical theorems that come from upper hemicontinuous utility functions cannot possibly describe real-world behavior, and economists like Simon used to recognize that their virtue was to make solvable what was otherwise a mare’s nest of integer constraints and infinite-dimensional non-convex problems. Having solved these problems, one was supposed to then look at actual problem solving and think — “Yep… pretty close” — and by this be satisfied that the underlying maximization problem was an ideal substrate for economics, in the same way that frictionless Newtonian physics is a substrate for what happens in the classical world. Economists should be no more surprised by the results of behavioral economics than physicists are when they discover that feathers and bowling balls drop at different rates in the atmosphere. The descent of feathers is a rebuke to Newtonian mechanics only if you are stuck thinking that frictionless physics *is* physics. As always, the models you use for any particular phenomenon need to be accurate enough to reflect that phenomenon. The dichotomy Andrew addresses here results from missing that somewhat obvious point.

      • Martha (Smith) says:

        “Economists should be no more surprised by the results of behavioral economics than physicists are when they discover that feathers and bowling balls drop at different rates in the atmosphere.”

        Yup. This is why my traditional opening for an introductory probability or statistics class with a calculus prerequisite was to remind students of Newton’s second law (F = ma) and the resulting free-fall formula, then to calculate how long it should take for an object dropped from six feet above the floor to reach the floor. Then I asked for three volunteers: one to drop an object from six feet above the floor, and two with stopwatches to time the descent. Then I pulled the object out from the bag I had under the desk: an inflated balloon. There were of course a lot of groans and laughter at this point, but we proceeded with the experiment, which of course took longer than predicted by the formula and also involved horizontal movement, and sometimes upward movement (especially if the HVAC system was on), and lots of laughs. I hope it also made a lasting impression on them, and helped them see how uncertainty is involved in ordinary things and how modeling needs to take it into account for the models to be anywhere near useful.
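        The classroom prediction is a two-line calculation: from h = (1/2) g t², the ideal drop time is t = √(2h/g). A minimal sketch (the six-foot height and standard g are the only inputs; air resistance is ignored, which is of course the whole point of the balloon):

```python
# Ideal free-fall time for a six-foot drop, ignoring air resistance.
import math

g = 9.81                 # gravitational acceleration, m/s^2
h = 6 * 0.3048           # six feet, converted to meters
t = math.sqrt(2 * h / g) # from h = (1/2) g t^2
print(round(t, 2))       # about 0.61 s; the balloon takes far longer
```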

      • Kyle C says:

        I think I would have understood Econ much better as an undergrad if the notional graphs had been called something like “benchmark models” rather than just models.

      • malcolmkass says:

        Well said. The point of the model is to explain the forces inherent in the system, not the specific prediction itself.

    • Richard McElreath says:

      I did my postdoc with Gigerenzer. I am very sympathetic to Simon et al. Channeling that experience, the usual criticism of “loss aversion” is that it is adaptive in a more realistic (?) context, like one in which gains are multiplicative in time, such that losses really do hurt more than gains.

      That perspective gets a lot of traction in animal behavior. Not so much in behavioral econ, I guess?

  1. Russell Almond says:

    Actually, I was just listening to a podcast yesterday which described the Elaboration Likelihood Model (Richard Petty was the guest). The link is here:

    Petty’s work is about persuasion, not really decision making, but it is similar. He describes two routes people can take: a central route, which involves seriously evaluating all of the arguments, and a peripheral route, which involves heuristics (in the Kahneman et al. sense), such as instead evaluating who is providing the argument.

    I think that something similar is happening here. If I’m buying a car, I’m going the central route and I’ll do a bunch of research online first. If I’m buying gas for the car, I’m looking at short-cuts like brand loyalty.

  2. LM says:

    ” loss-averse sentimentalists who, faced with even the simplest cognitive problem, prefer dodgy short cuts to careful analysis “

    … well that still sounds like a useful description of US Congressmen & politicians generally

  3. Peter Dorman says:

    The rationality-versus-heuristics framework already assumes that (a) only outcomes of decisions matter and (b) all outcomes are commensurable. We then have a “perfect” utility maximization benchmark and various departures or approximations of it due to information or decision process constraints. But what does rationality look like if these two underlying assumptions are relaxed, as they should be?

    • Steve says:

      +1 How about abandoning a consequentialist framework altogether? What if we go back to Kant? What “being rational” means, at least in certain contexts, is acting in such a way that one’s action conforms to a rule that can be universalized. In much human interaction it seems we are trying not to maximize anything, but simply to pick a rule that everyone in the group can be happy with. As I walk through the crowded subway, most people pick a walking speed that everyone can conform to, one that will move the crowd along without causing accidents. Economists tend to see these equilibria and explain them through some type of market mechanism. But often the simpler explanation is that we collectively pick a rule that works for the group as a whole. Wanting the world to be well-ordered, and taking steps to make it so, is at the heart of what most people mean by “rational.” When you get in an argument with your partner and you think that he or she isn’t being rational, you don’t mean that your partner isn’t being selfish enough. What you mean by rational is the opposite: your partner is taking a position that won’t work for the group as a whole. We are all Kantians at heart. It is a shame that the same amount of theoretical and empirical work hasn’t been spent on developing Kantian notions of rationality as has been spent on utilitarian notions.

  4. Anonymous says:

    Here’s an idea taken from Jaynes’s chapter on the “Ap distribution”. One way to explain it is to start from Laplace’s definition of probabilities as the ratio of “favorable cases to all cases”. So that,

    P(a) = (# of cases favorable to ‘a’)/(# of all cases) = fraction_favorable, or ‘f’ for short.

    Now consider the more realistic situation in which we don’t know with certainty, or can’t count, the number of such cases. Then the above can be generalized to

    P(a) = \int f P(f) df

    where P(f) is an uncertainty distribution which satisfies the same sum/product rules as probabilities (i.e., it’s just a probability in a Bayesian sense). Note that the same P(a) value can correspond to many different P(f)’s.

    One immediate consequence is to confirm the intuition, often expressed on this blog, that overly precise probabilities, for say election forecasts, are absurd. With the above generalization it wouldn’t make sense to quote P(a) to more significant figures than would be allowed by the width of P(f).

    Another consequence is that traditional utility maximization corresponds to the case where utility is a function U(P(a)), and thus depends on just the first moment of P(f). An immediate generalization is to let U be a functional of the entire distribution, U[P(f)].

    This corresponds to the intuition that not all P(a) = 1/2 examples are equal. If we’re in a casino where the game was designed so that P(a) = 1/2 and f is known precisely (P(f) = delta(f − 1/2)), our reaction is very different than when P(a) = 1/2 but P(f) is very spread out and there’s a great deal of uncertainty about its value.

    Letting U be a functional of the entire P(f), and not just its first moment, allows us to react differently to these two situations even though P(a) = 1/2 in both cases. This does seem to correspond to how people actually think.

    This simple generalization almost certainly addresses a great deal of the criticisms and limitations of traditional utility maximization. Whether it answers all of them remains to be seen.

  5. “On one hand, we have proponents of formal utility theory who grip that theory like a vise, under the impression that if they let go, there’s nothing but the void of sentiment. On the other, we have opponents of utility theory who are all too ready to abandon rationalism entirely, as here. In the middle is a big gap.”

    In the middle there is the work that is taught in graduate-level courses on behavioral economics in economics departments. Matthew Rabin is one of the most prominent architects of models in that middle, for which he won the John Bates Clark medal (the most highly regarded prize in economics after the Nobel). He particularly advocates one-parameter extensions of existing models, e.g. take the standard utility maximization framework, but assume people have some trouble with Bayesian updating, and see what that implies for markets etc. Other well-known models in that “middle” category are level-k thinking and related models (to which Colin Camerer has contributed a lot), models of social preferences (most prominently the Fehr-Schmidt model), and rational inattention (which would better be called “costly attention”). One main reason this middle ground dominates is practical: it’s just not feasible to incorporate non-standard behavioral assumptions into more complex models (such as financial markets, the macroeconomy, etc.) if you toss out all elements of maximization and rationality; tractability essentially forces you to work on the middle ground Andrew mentions.

    These “middle-ground” models are formal, and hence don’t make it into the pop-science literature. But they describe what current professional behavioral economists (with an econ PhD) do much better than the “loss averse sentimentalist” category. So I would argue that in professional behavioral economics (excluding, perhaps, what some psychologists sometimes call behavioral economics, e.g. “see-whether-it-sticks”-nudge type experiments), the middle ground that Andrew mentioned is the dominant paradigm.

    • Andrew says:


      Yitzhak’s papers are framed a bit too theoretically for my taste, but I respect that this work can have value within economics. Conversely, I realize that many people find my applied and conversational style hard to follow in its own way!

      I can certainly believe that many in the economics profession, theorists and applied researchers alike, can do good work in this middle zone between “Rational utility-maximization” and “Loss-averse sentimentalism.” That’s one reason it’s so frustrating when people seem to not recognize this middle ground (as in the quote in the above post).

    • Phil says:

      Gotta mention that Andrew and I went to high school with Matthew Rabin.

      Also that senior year in high school my brother beat Matthew in a game of tic-tac-toe. For money.

    • Dale Lehman says:

    One of my peeves about economics (and a major reason why I abandoned teaching it) is the virtually complete absence of any of this middle-ground work from the undergraduate curriculum. And the degree of formalism almost ensures that it may never get there. It makes me wonder what exactly economics students are learning – it ends up being mostly indoctrination, since most of the interesting behavior of people cannot easily be reduced to these more meaningful models. Instead, we offer the frictionless-gravity model (utility maximization) with a few anecdotes where behavioral economics presents exceptions – and, depending on your political persuasion, these are presented either as areas where the government can improve things through nudges, or as entertaining exceptions to rational behavior (which markets will sort out on their own).

  6. Sandro Ambuehl says:

    … b.t.w. Matthew Rabin’s homepage shows nicely how even the most prominent models of one of the most prominent behavioral economists receive no press coverage whatsoever:

  7. Michael Nelson says:

    Is rationality primarily an attribute of a decision-making process, or is it an attribute of the decision itself? I think Andrew is talking about rationality as a process, so that any decision made in a rational way is by definition a rational decision. Economists (traditionally) judge rationality in objective terms, so that we can observe an objectively rational decision and then infer that it is the result of a rational decision-making process. The distinction matters because, in terms of conducting empirical research, it’s much easier to observe decisions than to describe decision-making processes. (Even if you ask, people tend to justify their decisions retroactively, even when they made them completely emotionally – literally, to rationalize them.)

    If you think rationality happens in a person’s head, then you still have to decide where that process begins. If I misperceive the facts of a situation, does that make me irrational? Or ought my rationality be judged by the decision-making process that proceeds from my misperceptions? After all, only an irrational person would invest in what they believe to be a risky scheme, even if their belief is wrong. So I guess you can have irrational perceptions (as when applying fallacies unconsciously), irrational decision-making processes (as when consciously deciding to ignore logic), and/or irrational actions (as when you would have been better off taking a different action). The economist might observe an irrational action, then probe to discover the initial fallacy, and completely ignore the rationality of the action conditioned on the fallacy.

  8. David says:

    Rationality, in a sense, is conformity. The best decisions are the ones which appear to be correct, not the most logical ones.

    In finance, we would say someone was rational if he predicted the price of a stock correctly. The market price of the stock, however, was driven by all the irrational actors making irrational choices. However logically we make a decision, there are inherent biases and human intuitions behind it. The most ‘logical’ approach, in essence, is to predict the collective irrationality of these market participants. In other words, ‘rationality’ boils down to ‘conformity’.

  9. Justin says:

    1. “Rational utility-maximizers”

    2. “Loss-averse sentimentalists”

    is clearly a flawed dichotomy. Dichotomania.


    3. “Everyday rationalism”

    makes it a flawed trichotomy. Trichotomania.


