Gigerenzer: “On the Supposed Evidence for Libertarian Paternalism”

From 2015. The scourge of all things heuristics and biases writes:

Can the general public learn to deal with risk and uncertainty, or do authorities need to steer people’s choices in the right direction? Libertarian paternalists argue that results from psychological research show that our reasoning is systematically flawed and that we are hardly educable because our cognitive biases resemble stable visual illusions. For that reason, they maintain, authorities who know what is best for us need to step in and steer our behavior with the help of “nudges.” Nudges are nothing new, but justifying them on the basis of a latent irrationality is. In this article, I analyze the scientific evidence presented for such a justification. It suffers from narrow logical norms, that is, a misunderstanding of the nature of rational thinking, and from a confirmation bias, that is, selective reporting of research. These two flaws focus the blame on individuals’ minds rather than on external causes, such as industries that spend billions to nudge people into unhealthy behavior. I conclude that the claim that we are hardly educable lacks evidence and forecloses the true alternative to nudging: teaching people to become risk savvy.

Good stuff here on three levels: (1) social science theories and models; (2) statistical reasoning and scientific evidence; and (3) science and society.

Gigerenzer’s article is interesting in itself and also as a counterpart to the institutionalized hype of the Nudgelords.

35 thoughts on “Gigerenzer: ‘On the Supposed Evidence for Libertarian Paternalism’”

  1. >These two flaws focus the blame on individuals’ minds rather than on external causes, such as industries that spend billions to nudge people into unhealthy behavior. I conclude that the claim that we are hardly educable lacks evidence and forecloses the true alternative to nudging: teaching people to become risk savvy.

    In a former life I worked trying to educate people to make healthier choices. In my experience and observation of various programs, this doesn’t work, except for a few motivated people. In short, education != better choices, when it comes to things like healthy diet.

    • I agree with the general thrust of this. The issue is complicated — how much do I ascribe to poor decision-making or poor education rather than (for example) other people having different values or goals than I do — but in practice people do not seem to learn certain things very well. I would not claim they’re “ineducable” but I do think it’s hard to educate people in some areas related to risk.

      • I’ll explain what I meant. Disclaimer: this is purely anecdotal, from experience working in this field. When I worked in the ‘health and wellness’ arena (at least 10-15 years ago, both in research and ‘hands on’), there seemed to be a large push built on the idea that educating people about the risks of certain lifestyles would somehow dissuade them from ‘bad’ choices. For example, surely if people knew and understood the risks of obesity and how to make better dietary choices, then they would do so. This seemed to be a particularly popular view among academics, because I guess it made sense to them. However, I didn’t observe this to be the case. People make bad choices despite the education, and all the knowledge in the world doesn’t seem to dissuade them from doing so. (Now of course, there would be certain highly motivated individuals who would indeed make changes, but I observed that these people likely would have done so anyway.)

        Based on this experience, I certainly wouldn’t argue that people are “ineducable”, but rather that educating them doesn’t equate to changed action.

        This likely varies greatly depending on the subject and population, though. For example, maybe educating people in impoverished communities in a third world country about clean drinking water and providing a filtration device would change behavior. I don’t know. I can only speak from the experience I had.

        The phrase “the true alternative to nudging: teaching people to become risk savvy” just sounded so pompous and academic, it made me laugh. Maybe this works, but count me as a skeptic without knowing any more. Maybe education would help change my view haha

        • I honestly have no idea what you’re getting at. You can’t possibly really be denying that people change their behavior in response to new information. People in countries without high-quality drinking water boil their water before drinking it. That’s not a natural instinct–it’s obviously a behavior they were taught. In the 1950s everybody in the US smoked. Today far fewer do, and many of the people who do are actively trying to quit.

          Some things are difficult to implement, even if you know the right choice. That doesn’t mean that people can’t be educated in general, or that education doesn’t help on the margin even in those circumstances.

          >now of course, there would be certain highly motivated individuals who would indeed make changes, but I observed that these people likely would have done so anyway

          I severely doubt that those people would have made healthier diet choices in the absence of any education–rather, they would have found their own education since (sometimes spurious) dietary information is all over the internet. That doesn’t mean they don’t need education, it means they don’t need *your* education, which doesn’t exactly contradict Gigerenzer’s point.

          To Gigerenzer’s point, a couple of decades ago, those people would have found that they should primarily avoid fats, avoid eggs because of their “high cholesterol content”, and every day drink a glass of milk and eat 11 servings of bread and grains, a USDA recommendation designed by the dialectic between well meaning public servants and industrial farming lobbyists. Given better information, lots of people are still unhealthy, but there’s plenty of blame to go around that doesn’t all fall on people’s individual failures of reasoning.

        • You have to distinguish between decision making and changes in behaviour. Gigerenzer is making the point that the evidence shows that statistical skills can be taught. But he also makes the point that the nudgelords are pushing a highly individualistic narrative. I think he would agree that education alone won’t solve all these problems when institutions and big business push very unhealthy choices.

        • I think this commentary misses an important point, one that was mentioned but given short shrift in the Gigerenzer article as well.

          Educating people about healthy eating is not being done in a vacuum. It is being done in the midst of a tsunami of disinformation from the agricultural industry, generally known as marketing. In their ordinary daily activities people are exposed to hundreds of messages promoting the ingestion of all sorts of highly-palatable edible substances [h.t. Michael Pollan] that have a vague resemblance to food and are marginally biodegradable. On top of that, in their daily activities people are seldom more than a few yards away from places where they can purchase those items for immediate consumption. Workplaces and even schools are bristling with snack vending machines. Whereas once you had to go to a food store, today you can purchase them in almost any commercial venue that is open to the public.

          Is it any wonder that brief educational interventions, undertaken in infrequently occurring settings and circumstances, have relatively little effect in the face of the massive marketing efforts of the food industry, which bombards people with counter-educational messages and ubiquitous in-your-face nudges toward bad choices throughout the waking day? Really, it would be miraculous if education could be effective in this environment.

          The obesity/diabetes epidemic is not an accident. It is a direct consequence of a social choice to permit a highly aggressive industry to massively enrich itself by exploiting aspects of human physiology that evolved to adapt to chronic food scarcity and leave people to deal with the poisonous consequences on their own. Welcome to contemporary western civilization.

        • >The obesity/diabetes epidemic is not an accident. It is a direct consequence of a social choice to permit a highly aggressive industry to massively enrich itself by exploiting aspects of human physiology that evolved to adapt to chronic food scarcity and leave people to deal with the poisonous consequences on their own. Welcome to contemporary western civilization.

          Low fat (high carb) diets make you eat more than you need to. Of course, if you try to cut down on carbs that means you won’t be eating 95% of what gets sold as food since sugar or corn syrup gets added to everything.

        • Smoking and clean water are just bad examples. When you think of “risk savviness,” the best example is gambling. Lots of people try to educate people that casino gambling and sports gambling are bad bets. Classical economists *assume* that gambling patrons understand this, and ascribe their behavior either to risk-loving behavior in the relevant utility space or to some sort of utility from gambling, being regarded as a big shot, getting “free” comps and the like sufficient to overcome the negative expected value.

          But has anyone ever been nudged into betting “with their head, not over it”? What sort of information inculcation pattern works? And of course, as mentioned, the marketing forces of the gaming industry work immensely in the other direction.

        • Jonathan:

          Several years ago in a university library I came across a charming book by Maxim (of gun fame) where he went through chapter after chapter demolishing the Martingale system. (For those who don’t know, the Martingale system is to bet $1, then if you lose, bet $2, then if you lose, bet $4, etc. You’re then guaranteed to win exactly $1—or lose your entire fortune. A sort of lottery in reverse, but an eternally popular “system.”)

          I guess that Maxim was frustrated because lots of people kept following that stupid system. Not because of their utility function or whatever, just because of some mixture of hope and confusion.
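The Martingale’s failure mode is easy to see in a quick simulation. This is my own sketch, not anything from Maxim’s book: I’m assuming roulette-style odds (18/38 chance of winning an even-money bet) and a $100 bankroll purely for illustration.

```python
import random

def martingale(bankroll, p_win=18/38, rng=random):
    """Bet $1, double the bet after every loss, and stop at the first
    win (net +$1) or when the next bet can't be covered (ruin)."""
    start, bet = bankroll, 1
    while bet <= bankroll:
        if rng.random() < p_win:
            return start + 1   # the "guaranteed" dollar... usually
        bankroll -= bet        # lost this round; double up
        bet *= 2
    return bankroll            # busted mid-doubling

rng = random.Random(1)
results = [martingale(100, rng=rng) for _ in range(20000)]
print(sum(results) / len(results))  # on average, a loss
```

With a $100 bankroll, six straight losses (probability about 2% per session at these odds) cost $63 and make the next $64 bet unaffordable, so each session ends at either $101 or $37, and the rare wipeouts swamp the many $1 wins: exactly the “win $1 or lose your fortune” lottery-in-reverse Maxim was demolishing.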

  2. Apparently only researchers are subject to irrational biases, namely, confirmation bias. Yet Gigerenzer also selects examples that fit his argument and ignores those that do not (such as default biases that have nothing to do with recommendation). He engages in straw-man reasoning. (H&B researchers think that biases are like visual illusions. Someone may have said this once, but it is not a commonly held view.) The evidence he selects (e.g., on the conjunction fallacy) has been, in my view, demolished, yet he persists in ignoring critics. In sum, I don’t see why you give him a free pass when you correctly call out others for self-serving biases and lack of integrity.

    • Jon:

      You write, “H&B researchers think that biases are like visual illusions. Someone may have said this once, but it is not a commonly held view.” That may be, but one of the most prominent publications from heuristics and biases researchers is Daniel Kahneman’s “Thinking Fast and Slow,” which indeed uses a visual illusion (the Müller-Lyer illusion) as an example of Systems 1 and 2, and then follows up with, “Not all illusions are visual. There are illusions of thought, which we call cognitive illusions.” A quick google turned up this from a 2011 New York Times article by Kahneman: “I was reminded of visual illusions, which remain compelling even when you know that what you see is false. I was so struck by the analogy that I coined a term for our experience: the illusion of validity.”

      I’m not saying this as some sort of gotcha. The analogy to visual illusions makes a lot of sense to me. This may represent my ignorance, but in any case I think it’s a commonly-held view, not just a view held by Kahneman and me.

      Regarding your final question: I don’t want to give anybody a “free pass.” If you can point to particular articles that shoot down some of Gigerenzer’s claims, I’d be happy to post about them.

      • Let’s not forget Massimo Piattelli-Palmarini’s excellent book “Inevitable Illusions”, which was my introduction to the concept of cognitive illusions. It has been a long time since I read it, but I think MPP makes an explicit analogy between optical illusions and cognitive illusions.

        Every thing is what it is, and not some other thing. A cognitive illusion is not a visual illusion. But I think if you are trying to explain what a cognitive illusion is, to someone who doesn’t know the term, and you say something like “it’s a bit like a visual illusion: your mind unconsciously ‘sees’ something whether it’s there or not”, that’s fine. People just have to understand the limits of the analogy.

        To give one of a zillion examples, let’s revisit the Hot Hand hypothesis. Most of the Hot Hand news in recent years has been related to the fact that hoi polloi had it right after all: due to a somewhat subtle bias in the way the Hot Hand had been studied, researchers had vastly underestimated the Hot Hand effect. (Readers who don’t know what I’m talking about can search for “hot hand” on this blog). But one part of the original Hot Hand research remains true: people see patterns in randomness. Andrew, I know you use this in a classroom exercise sometimes: have one team of students generate a list of actual coin flip outcomes, have another group try to simulate one by making up a list without actually flipping the coin, and you can tell the difference in a few moments. People who make up 100 coin flips will (generally) not have, say, 6 consecutive heads or tails at any point in the sequence, but that can happen quite easily in reality. Or, to put it another way, when people see 6 consecutive heads they automatically think the sequence is not random. It’s a gut reaction. I think it’s fair to say that’s analogous to a visual illusion… as long as you don’t take the analogy too far.
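That classroom contrast is easy to check numerically. Here is a quick simulation of my own (a sketch, not part of the actual exercise) estimating how often 100 fair coin flips contain a run of 6 or more identical outcomes:

```python
import random

def longest_run(flips):
    """Length of the longest streak of identical outcomes."""
    best = cur = 1
    for prev, nxt in zip(flips, flips[1:]):
        cur = cur + 1 if nxt == prev else 1
        best = max(best, cur)
    return best

rng = random.Random(0)
trials = 10000
hits = sum(
    longest_run([rng.random() < 0.5 for _ in range(100)]) >= 6
    for _ in range(trials)
)
print(hits / trials)  # around 0.8: runs of 6+ are the norm, not the exception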

        As for Gigerenzer’s piece, I think it’s fine as an opinion piece — he can have whatever opinion he wants, and it’s nice of him to explain why he feels the way he does — but some bits made me wince. For instance, “In this account, the enemy is within us, embodied in the very nature of our thinking. As Thaler and Sunstein (2008) wittily asserted, humans are not even remotely like Homo economicus, but more like Homer Simpson. That message has become extremely popular, precisely because it is directed against neoclassical economists and other libertarians.” This was all fine until that final clause, which seems kinda zany to me. I think the belief that people don’t act like Homo economicus has become extremely popular — if indeed it has — mostly because it is obviously true. Sure, I’m always happy to stick a thumb in the eye of the neoclassical economists, but Gigerenzer has cause and effect backwards: the reason I find neoclassical economists mock-worthy is that I think their view of the world is far from reality. Gigerenzer seems to think it’s the other way around: I’ve decided, for whatever reason, that neoclassical economists are mockable, therefore when I hear something that contradicts them I tend to believe it. I suppose there are people like that — I think S.. R.. was a bit like that when it came to psychology — but I think most people who have considered the issue believe that people are often irrational (in the sense of neoclassical economics) because we observe that to be the case, even in ourselves.

        • Look at the quote again. The issue is not that people reject the view that humans reason as Homo economicus. The issue is that they accept that humans are like Homer Simpson. He is saying that people accept the latter view because it is opposed to neoclassical econ. Or, another way of stating it: the opponents of neoclassical econ accept a false dichotomy. We are either rational as described by decision theory, or we’re irrational. I agree with you that we aren’t rational if decision theory is the normative theory of rationality. But that doesn’t make us irrational, because decision theory isn’t the correct normative theory.

        • Steve, I don’t think people believe what they believe “precisely because it is directed against neoclassical economists and other libertarians.”

          What about you? You seem to agree that people aren’t rational by the definition of neoclassical economics. Did YOU choose that view “precisely because it is directed against neoclassical economists”? No. I didn’t either.

  3. I am puzzled by your lead-in sentence, referring to Gigerenzer as “The scourge of all things heuristics and biases.” As I understand the usage of that term, you would be saying he is opposed to those things. Indeed, he often states opposition to purported biases, but he certainly is not opposed to heuristics. I’ve always found his positions insightful, but often unnecessarily overstated as an opposition to biases. It’s almost like the purpose of his argument is to counteract Kahneman.

    Heuristics can enhance or mitigate biases. I accept most of Kahneman’s analysis of biases in decision making under uncertainty. I also think Gigerenzer has valuable insights into how heuristics can be used to improve such decision making. But many of our heuristics for uncertain decisions form the basis for our biases. So, heuristics (to me) are a neutral term: they can be used to counteract or embody biases. But, for the most part, biases seem bad to me (sub-optimal). Your lead-in to the post appears to conflate the two: is there something I am missing in the association you are making?

      • Hi Andrew,

        If a student came to you with the vague goal of wanting to understand heuristics and biases, would you steer them more toward the H&B literature or the scourge literature (“adaptive toolbox”)?

        Better yet, if you could recommend one book to this student, what would it be?

        (I know this isn’t your area, but I am genuinely curious about your impression of the pissing contest)

        • Jordan:

          I teach this stuff in my Communicating Data and Statistics course. In that course, I assign the classic book by Kahneman, Slovic, and Tversky and also some articles by Gigerenzer and his collaborators. I don’t assign the books of Thaler, Ariely, etc., as I find the original research articles to be more interesting and, indeed, more readable than books that summarize and hype the subfield.

  4. You know, beer and fried chicken are really bad for me. If I gave them both up, I’d probably live a longer life overall. But consuming them is undeniably enjoyable.

    I don’t think eating unhealthy foods is irrational, exactly; it’s just a lot of positive reinforcement. People aren’t necessarily dumb; they’re often just trying to eke out the most pleasure they can find in the short time they have.

    Even if I’m just acting rationally, is giving up alcohol and all fried foods for a lifetime worth an extra couple of years of life, on average? Hard to say!

    • Sean:

      I think the idea is we can have beer and fried chicken, but in smaller servings. I agree that smaller servings give less enjoyment, but perhaps there’s some nonlinearity there that can motivate smaller portions. I don’t really know, though; we’d need someone like Dr. Wansink to figure out the details here.

      • Full disclosure: I’m currently on vacation in the Dominican at an all inclusive resort, so any impassioned defense of beer and wings from me at the moment is somewhat suspect.

        If only I had someone to nudge me back on the right path! Maybe they could redesign the buffet to put all the healthy food first or something.

    • Sean, yeah, I agree with you. This is why I supported the general thrust of jd’s comment (above) but didn’t endorse the specifics.

      Plenty of people choose to do things that are bad for their health, and that is not necessarily irrational. Not at all. If you perceive the positives to be greater than the negatives then it’s a perfectly reasonable and rational thing to do.

      But many people do behave in ways that are either outright irrational in the sense that neoclassical economists talk about rationality, or that require some hoop-jumping in order to make them seem rational. They buy gym memberships and don’t go to the gym, they eat a whole container of ice cream at a sitting even though they know (correctly) that they’ll regret it later, and so on. I know someone who used to keep tens of thousands of dollars in a non-interest-bearing checking account, year after year, when simply transferring some of it into savings would have earned a wee bit of interest, and using half of it (or whatever) to buy a CD would have earned enough to make it worth the ten minutes of effort it would have taken…as he himself said, but still didn’t do!

      A person isn’t really a single entity with a single utility function. We can perhaps be better pictured as a collection of different entities with different utility functions, and each entity gets to take charge occasionally. Id, Ego, Superego, OK, sure, but I think maybe Calvin and Hobbes is more like it: https://web.mit.edu/manoli/mood/www/calvin-full.html
      The “I” who bought the gym membership is not the same “I” who would have to go to the gym; the “I” who enjoys eating that tub of delicious ice cream is not the same “I” who regrets doing so after it’s done (for one thing, that former I was hungry, whereas the latter I is over-stuffed).

      So I’m with you on the specific example that it can be rational to eat foods that you know are bad for your health. But I still think — well, I know — that eating choices contain some measure of irrationality, at least when viewed from a classical perspective.

      • Yeah, I can definitely relate as a person who once bought a year’s worth of gym subscription to see if it’d force me to go to the gym. It only worked for a month or two before I quit, mostly because I hated going there; the physical discomfort was too much. In contrast, I’ve kept up martial arts for years because it’s more fun and social, even though it’s just as physically taxing.

        Procrastination is a really fascinating phenomenon to me. It’s irrational in the sense that it usually makes things worse in the long run, but in the present moment it seems reasonable (avoid doing the thing that is aversive).

        I guess a lot of the nudges are trying to find ways to make people do actions that are unpleasant or otherwise aversive in the present to reap a benefit in the future. But a lot of behaviours are only irrational from a future-focus, and in the present they are unpleasant (or alternatively, pleasurable in the moment but harmful in the long term).

      • “But many people do behave in ways that are either outright irrational in the sense that neoclassical economists talk about rationality, or that require some hoop-jumping in order to make them seem rational.”

        This doesn’t reject the assumption of rationality at all! AFAIK, when “neoclassical economists talk about rationality”, they refer to population averages. It’s assumed that all people will exhibit behaviors across the spectrum of apparent rationality, but on average people behave rationally. You’re arguing against a strawman. There was never an assumption that most behavior by most people would be clearly rational.

        There’s just no question that rationality is a valid assumption. When you put Nikes on sale, people buy more of them. When the price of gas goes up, people drive less. When it’s raining, people stay inside. When the home team is winning, more people go to the ballpark.

        People are, in general, rational. That’s why you have a computer to comment on blogs.

        • Anon:

          There’s a lot out there in the economics literature. I agree that there’s literature on aggregate behavior that is consistent with rationality, for example the price elasticity of demand, as you say. There’s also a literature on individual rationality, for example on rational addiction and individual utility functions. Rationality can be defined in various ways. When Phil says that “But many people do behave in ways that are either outright irrational in the sense that neoclassical economists talk about rationality, or that require some hoop-jumping in order to make them seem rational,” he’s not arguing against a straw man, at least not if you insert the word “some” before “neoclassical” in his sentence.

        • “at least not if you insert the word “some” before “neoclassical” in his sentence.”

          That’s a pretty big insertion. I’ll wager that that “some” is a pretty small share, given that individuals obviously frequently engage in irrational behavior.

          I came back to say, though, that the fact that people don’t generally respond to scientific warnings is actually pretty rational. Aside from the fact that warnings expressed as a low probability of an outcome have almost no meaning at the individual level, so many scientific warnings have come to naught or even been famously reversed that it’s reasonable to hold them all in very strong suspicion under any circumstances. That’s not to mention the caveat-ridden jargon that should well suggest to everyone that many of these claims have no basis in fact.

          Here’s a funny one I just looked up:

          I googled: “probability of getting cancer from eating red meat”

          “However, if the association of red meat and colorectal cancer were proven to be causal, data from the same studies suggest that the risk of colorectal cancer could increase by 17% for every 100 gram portion of red meat eaten daily”

          This statement is meaningless. No rational person *should* change their behavior based on this statement.
          The “if” and “could” almost certainly negate the whole effect. But going further only makes it worse. 100g is only 3.6oz. Since the average American consumes 4.8oz of red meat daily, wouldn’t this imply a high risk of colorectal cancer? Shouldn’t everyone know someone who contracted it?

          The rational response to this statement is to conclude no one knows anything about the effects of red meat on colorectal cancer. Since inane statements like this are routine, it’s rational to generalize that scientists are usually full of crap, at least when it comes to nutritional studies. And if you dig into the science of nutrition, you’ll only confirm this conclusion.

        • Anon:

          I disagree with your reasoning regarding colorectal cancer. You write, “Since the average American consumes 4.8oz of red meat daily, wouldn’t this imply a high risk of colorectal cancer?” The answer to your question is No: the 17% is a relative increase over some baseline risk, and a high relative risk does not imply that the absolute level of the disease is high.

          Also, unfortunately or not, the literature on individual rationality in economics is huge. I disagree with your implication that this is a minor thing in economics.

        • “Since the average American consumes 4.8oz of red meat daily, wouldn’t this imply a high risk of colorectal cancer? Shouldn’t everyone know someone who contracted it?”

          Anon, I guess it depends on the type of people you know, how many people you know, your age, and survivorship bias. For perspective, the cumulative risk of CRC in an “average risk” population, by age 75, is about 3-4% (at age 60 it’s about 1%, and at age 55 it’s about 0.5%). This aggregate consists of heavy red meat-eaters, moderate red meat-eaters, and people who don’t eat red meat. If the aggregate largely consists of meat-eaters (or heavy meat-eaters), then applying the relative risk for meat-eaters isn’t gonna tell us much–it’s baked into the aggregate already. Applying a relative risk reduction for reducing red meat consumption in the population could tell us something about reducing aggregate cancer rates.
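To make the relative-vs-absolute distinction concrete, here is the arithmetic, using the figures quoted above (a ~3.5% baseline cumulative risk by age 75, the midpoint of the 3-4% range, and the hedged 17% relative increase per daily 100 g portion; both numbers are taken at face value purely for illustration):

```python
baseline = 0.035           # cumulative CRC risk by ~age 75 (midpoint of 3-4%)
relative_increase = 0.17   # per daily 100 g red meat portion, *if* causal

elevated = baseline * (1 + relative_increase)
print(elevated, elevated - baseline)
# lifetime risk of roughly 4.1%, an absolute bump of roughly 0.6 percentage points
```

So even granting causality, a “high” relative risk on a small baseline moves the absolute risk by well under a percentage point, which is why heavy red-meat consumption is compatible with most people not knowing anyone with the disease.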

  5. Phil writes, “You seem to agree that people aren’t rational by the definition of neoclassical economics. Did YOU choose that view “precisely because it is directed against neoclassical economists”? No. I didn’t either.”

    That is not what Gigerenzer is saying. He is saying that people’s belief that humans reason like Homer Simpson is caused by their rejection of neoclassical economics. Like I said, a better way of putting it is that people see the silliness of the neoclassical econ view, but accept its conception of rationality, and thus take the false choice of thinking that humans are dumb like Homer Simpson. The reference to Homer Simpson is not a reference to someone who simply fails to reason using game theory. Homer is dumb. So, yes, people are choosing to believe that normal folk have totally error-ridden reasoning because they have bought into that false dichotomy. I think the problem is you may also accept this false dichotomy, so that you cannot see that Gigerenzer is saying “not rational as described by game theory does not equal not rational.”

  6. What the hell is “libertarian paternalism”…? Libertarians writ large are very skeptical of nudges (since many of them are built on hilariously terrible social psychology research), etc. Do we really think the Reason crowd (or, gasp, The Hoover Institution *dun dun dunn*) would ever advocate this nonsense?

    That’s rhetorical, because “no”.

  7. Even if Gigerenzer were right (he’s not), nudging would still be useful given limited cognitive resources in a complex world that often makes unhealthy and unwise decisions much easier than decisions that most people would say they’d prefer.

    Gigerenzer claims that heuristics are adaptive, but adaptive in what sense? If you’re being prepped for surgery, I’m betting you’d rather the doctors use a checklist than rely on heuristics.

    • “Gigerenzer claims that heuristics are adaptive, but adaptive in what sense?”

      Maybe if you read his papers you’d know what he means by “adaptive”
