Gigerenzer: “The Bias Bias in Behavioral Economics,” including discussion of political implications

Gerd Gigerenzer writes:

Behavioral economics began with the intention of eliminating the psychological blind spot in rational choice theory and ended up portraying psychology as the study of irrationality. In its portrayal, people have systematic cognitive biases that are not only as persistent as visual illusions but also costly in real life—meaning that governmental paternalism is called upon to steer people with the help of “nudges.” These biases have since attained the status of truisms. In contrast, I show that such a view of human nature is tainted by a “bias bias,” the tendency to spot biases even when there are none. This may occur by failing to notice when small sample statistics differ from large sample statistics, mistaking people’s random error for systematic error, or confusing intelligent inferences with logical errors. Unknown to most economists, much of psychological research reveals a different portrayal, where people appear to have largely fine-tuned intuitions about chance, frequency, and framing. A systematic review of the literature shows little evidence that the alleged biases are potentially costly in terms of less health, wealth, or happiness. Getting rid of the bias bias is a precondition for psychology to play a positive role in economics.

Like others, Gigerenzer draws the connection to visual illusions, but with a twist:

By way of suggestion, articles and books introduce biases together with images of visual illusions, implying that biases (often called “cognitive illusions”) are equally stable and inevitable. If our cognitive system makes such big blunders like our visual system, what can you expect from everyday and business decisions? Yet this analogy is misleading, and in two respects.

First, visual illusions are not a sign of irrationality, but a byproduct of an intelligent brain that makes “unconscious inferences”—a term coined by Hermann von Helmholtz—from two-dimensional retinal images to a three-dimensional world. . . .

Second, the analogy with visual illusions suggests that people cannot learn, specifically that education in statistical reasoning is of little efficacy (Bond, 2009). This is incorrect . . .

It’s an interesting paper. Gigerenzer goes through a series of classic examples of cognitive errors, including the use of base rates in conditional probability, perceptions of patterns in short sequences, the hot hand, bias in estimates of risks, systematic errors in almanac questions, the Lake Wobegon effect, and framing effects.
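
One concrete instance of how small-sample statistics can differ from large-sample statistics: take many short sequences of fair-coin flips and, within each sequence, compute the proportion of heads that immediately follow a head. The average of those within-sequence proportions is well below 1/2, even though the coin has no memory, which is part of why apparent streakiness in short sequences is so easy to misread. Here is a minimal simulation of that point (my sketch, not code from the paper; the fair coin and sequences of length 4 are illustrative assumptions):

import random

random.seed(1)

def prop_heads_after_heads(seq):
    # proportion of heads among flips that immediately follow a head
    followers = [seq[i + 1] for i in range(len(seq) - 1) if seq[i] == 1]
    return sum(followers) / len(followers) if followers else None

props = []
for _ in range(100_000):
    seq = [random.randint(0, 1) for _ in range(4)]  # 1 = heads
    p = prop_heads_after_heads(seq)
    if p is not None:  # skip sequences with no head to condition on
        props.append(p)

print(sum(props) / len(props))  # about 0.40, not the naive 0.50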

I’m a sucker for this sort of thing. It might be that in places Gigerenzer overstates his case, but he makes a lot of good points.

Some big themes

In his article, Gigerenzer raises three other issues that I’ve been thinking about a lot lately:

1. Overcertainty in the reception and presentation of scientific results.

2. Claims that people are stupid.

3. The political implications of claims that people are stupid.

Overcertainty and the problem of trust

Gigerenzer writes:

The irrationality argument exists in many versions (e.g. Conley, 2013; Kahneman, 2011). Not only has it come to define behavioral economics but it also has defined how most economists view psychology: Psychology is about biases, and psychology has nothing to say about reasonable behavior.

Few economists appear to be aware that the bias message is not representative of psychology or cognitive science in general. For instance, loss aversion is often presented as a truism; in contrast, a review of the literature concluded that the “evidence does not support that losses, on balance, tend to be any more impactful than gains” (Gal and Rucker, 2018). Research outside the heuristics-and-biases program that does not confirm this message—including most of the psychological research described in this article—is rarely cited in the behavioral economics literature (Gigerenzer, 2015).

(We discussed Gal and Rucker (2018) here.)

More generally, this makes me think of the problem of trust that Kaiser Fung and I noted in the Freakonomics franchise. There’s so much published research out there, indeed so much publicized research, that it’s hard to know where to start, so a natural strategy for sifting through and understanding it all is to use networks of trust. You trust your friends and colleagues, they trust their friends and colleagues, and so on. But you can see how this can lead to economists getting a distorted view of the content of psychology and cognitive science.

Claims that people are stupid

The best of the heuristics and biases research is fascinating, important stuff that has changed my life and gives us, ultimately, a deeper respect for ourselves as reasoning beings. But, as Gigerenzer points out, this same research is often misinterpreted as suggesting that people are easily-manipulable (or easily-nudged) fools, and this fits in with lots of junk science claims of the same sort: pizzagate-style claims that the amount you eat can be manipulated by the size of your dining tray, goofy poli-sci claims that a woman’s vote depends on the time of the month, air rage, himmicanes, shark attacks, ages-ending-in-9, and all the rest. This is an attitude which I can understand might be popular among certain marketers, political consultants, and editors of the Proceedings of the National Academy of Sciences, but I don’t buy it, partly because of zillions of errors in the published studies in question and also because of the piranha principle. Again, what’s important here is not just the claim that people make mistakes, but that they can be consistently manipulated using what would seem to be irrelevant stimuli.

Political implications

As usual, let me emphasize that if these claims were true—if it were really possible to massively and predictably change people’s attitudes on immigration by flashing a subliminal smiley face on a computer screen—then we’d want to know it.

If the claims don’t pan out, then they’re not so interesting, except inasmuch as: (a) it’s interesting that smart people believed these things, and (b) we care if resources are thrown at these ideas. For (b), I’m not just talking about NSF funds etc., I’m also talking about policy money (remember, pizzagate dude got appointed to a U.S. government position at one point to implement his ideas) and just a general approach toward policymaking, things like nudging without persuasion, nudges that violate the Golden Rule, and of course nudges that don’t work.

There’s also a way in which a focus on individual irrationality can be used to discredit or shift blame onto the public. For example, Gigerenzer writes:

Nicotine addiction and obesity have been attributed to people’s myopia and probability-blindness, not to the actions of the food and tobacco industry. Similarly, an article by the Deutsche Bank Research “Homo economicus – or more like Homer Simpson?” attributed the financial crisis to a list of 17 cognitive biases rather than the reckless practices and excessive fragility of banks and the financial system (Schneider, 2010).

Indeed, social scientists used to talk about the purported irrationality of voting (for our counter-argument, see here). If voters are irrational, then we shouldn’t take their votes seriously.

I prefer Gigerenzer’s framing:

The alternative to paternalism is to invest in citizens so that they can reach their own goals rather than be herded like sheep.

Comments

  1. “This is an attitude which I can understand might be popular among certain marketers, political consultants, and editors of the Proceedings of the National Academy of Sciences, but I don’t buy it, partly because of zillions of errors in the published studies in question and also because of the piranha principle. Again, what’s important here is not just the claim that people make mistakes, but that they can be consistently manipulated using what would seem to be irrelevant stimuli.”

    I have lately started to wonder whether all this type of social science research, and subsequent media attention, can perhaps better be seen as a way to influence policy, and try and change things.

    Perhaps it doesn’t matter whether it is actually the case that people can be manipulated by small interventions or stimuli; all that matters is that you can, at some point in time, use this “knowledge” to do all kinds of (irreversible) stuff.

    If after X years the “findings” weren’t so robust after all, nothing is lost IF things have already been implemented and changed on the basis of the prior “findings” and can’t easily be turned back to what it was.

    • “I have lately started to wonder whether all this type of social science research, and subsequent media attention, can perhaps better be seen as a way to influence policy, and try and change things.”

      I have also lately started to wonder whether this type of social science research, and subsequent media attention, can also be used to change things in science itself. You could easily, in my opinion and reasoning, pick and choose certain things to “investigate” while basically almost certainly knowing the kind of “findings” they will produce. You can then use this “meta scientific knowledge” to change all kinds of stuff in scientific practice. For instance:

      # you could “replicate” only a very particular type of social science research of which you almost certainly know it will not replicate
      # you could use this “meta scientific knowledge” to in turn hammer on about “replicability” and how things should really be changed in a certain way
      # you could use a friendly editor to implement your “proposal” and in turn “investigate” it and find all kinds of positive things concerning your proposal
      # you could do this with several other issues related to science, and how science is done, to try and “paint the picture” you want to showcase
      # then you can combine it all to propose and implement things that maximize the chances of getting the type of science, and scientific results, you want.

      Although i reason this process can reflect a “wise” and “valid” way to go about things (or closely resembles it), i reason the chance of possible manipulation is very high. Especially when things get implemented way too soon, for instance on the basis of a single (flawed?) study. I reason “meta scientific” research can be manipulated, and abused, and be flawed, etc. just like “normal” research. Also possibly see: https://blogs.plos.org/absolutely-maybe/2017/08/29/bias-in-open-science-advocacy-the-case-of-article-badges-for-data-sharing/

  2. Quote from the blogpost: “In contrast, I show that such a view of human nature is tainted by a “bias bias,” the tendency to spot biases even when there are none. ”

    If i am not mistaken i used the term “bias bias” on this blog at least 2 times to refer to instances where people shout “bias” to counter some argument someone makes, or something like that, in such a way that people stop really thinking about, or discussing, things.

    Perhaps Gigerenzer’s coining of the term “bias bias” and definition is (at least somewhat) in line with mine.

      • “The fact that you don’t get credit is probably due to bias bias bias.”

        Ha!

        I have found out that credit is given to the wrong people way too much in science/academia for my liking. It’s one of the things i dislike about science/academia the most.

        Anyway, perhaps i (unconsciously) used the term and definition because i read Gigerenzer (i don’t think i read his paper though), or came across the term somewhere. It looks like Gigerenzer’s paper was published in december 2018. I tried to search this forum for “bias bias” and see if i could find when i wrote it and see if it was before december 2018, but i couldn’t find anything in my short search.

        I personally don’t care about “credit” because i don’t want a “career” in science/academia anymore, but i DO think it’s very important to give credit to the correct people in science/academia in general. I think this might also be very important in science/academia because i reason the chances are higher that that person will also have other good ideas/thoughts/etc. In my reasoning, and view of science, this could all be an important part of making sure the “best” and “brightest” people are working in science/academia.

        • Quote from above: “Anyway, perhaps i (unconsciously) used the term and definition because i read Gigerenzer (i don’t think i read his paper though), or came across the term somewhere. It looks like Gigerenzer’s paper was published in december 2018. I tried to search this forum for “bias bias” and see if i could find when i wrote it and see if it was before december 2018, but i couldn’t find anything in my short search.”

          Ha, found it!

          Here i am april 26th 2018

          https://statmodeling.stat.columbia.edu/2018/04/26/quick-rule-thumb-someone-seems-acting-like-jerk-economist-will-defend-behavior-essence-morality-someone-seems-something-nice/#comment-716626

          “The “heuristic bias” (also known as the “bias bias”):

          When you quickly try to explain the behavior and/or reasoning by others through means of a certain specific “bias”, hereby not really engaging anymore in rational, objective, and critical discourse and/or reasoning about said behavior and/or reasoning by others.”

        • “I’m sure thousands of people have used this phrase/idea, so don’t feel too bad/cheated.”

          I don’t feel cheated/bad. As for your sentence: “I’m sure thousands of people have used this phrase/idea (…)”, i think it’s not just the term itself, but also the definition/interpretation that should be taken into account when talking about this all.

          Thank you for your comment though, which allows me to point out that i am NOT suggesting Gigerenzer did not think about this himself, or “stole” anything from anyone, or whatever!

          I just thought it was funny, especially since i wrote in april 2018 that “Perhaps that will be another bias soon to be discovered, investigated, and talked about.”

  3. For instance, loss aversion is often presented as a truism; in contrast, a review of the literature concluded that the “evidence does not support that losses, on balance, tend to be any more impactful than gains” (Gal and Rucker, 2018). Research outside the heuristics-and-biases program that does not confirm this message—including most of the psychological research described in this article—is rarely cited in the behavioral economics literature (Gigerenzer, 2015).

    I personally consciously try to learn *only* from success/gain and never from failure/loss. There are so many reasons for something to go wrong that are beyond your control, and conversely it is much harder for the stars to align to repeatedly result in success. So, a good heuristic is to attribute the failures to external causes but success to internal.

    I learned this from closely observing and quantitatively modelling the behavior of some rats… and then tested it on myself. I first used it successfully to greatly improve my performance on simple tasks like darts and free throws. Now I apply it generally.

    • An interesting heuristic.

      Being afraid of everything that can go wrong has some value, but is not very efficient. It can be more efficient to just find something that works and do that.

    • If you want to extract useful knowledge from an activity or event, you must correctly identify the processes that control it. If you don’t do that correctly, you won’t learn anything from either success or failure.

      What do you learn from the success or failure of a single investment? Probably nothing. Same with weather predictions. What explains which soldiers from Pickett’s division survived Pickett’s charge in the Battle of Gettysburg? Some aspect of their personal behavior on the battlefield? Highly doubtful.

      • If you want to extract useful knowledge from an activity or event, you must correctly identify the processes that control it.

        Not at all; if you were correct, machine learning would not work.

        • For the record, I assumed Anoneuoid was being sarcastic. I was going to write “What makes you think machine learning works?” and then realized that that’d be missing the point. And I was going to point out that Newell and Simon pointed out, around 1970*, that local descent search can’t possibly work. But that’d have made my missing the point even more grody.

          My history paper in 1982 (for my MA in East Asian Studies) was on the inanity of looking for cycles in Japanese economic activity that crossed the destruction of WWII (if memory serves, the economists were into “Kuznets Cycles”, which drove the historians nuts), so I have a long history of ranting about this…

          *: They had a cartoon drawing of a robot climbing a tree screaming “I’m getting closer to the moon!”

        • Ha, no worries, really you were both right: programs can “learn” by recognizing patterns, with no knowledge of the causes of the patterns, and I think we all know that’s how humans often learn. (A toy sketch of this point follows below.)

          When I wrote the original comment I was thinking about one-off or less common events, where knowing the causes is relevant and people often misattribute causes to things that are completely or mostly irrelevant. (i.e., “well I think I lived to be 103 because I ate three tomatoes and one banana every day for breakfast for 100 years”)
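
          (To make that concrete, here is a toy sketch, mine and purely illustrative: a hidden variable z drives both a feature x and an outcome y, so x has no causal effect on y at all, yet a purely associational fit of y on x predicts y almost perfectly. The same fit would, of course, say nothing about what happens if you intervene on x.)

          import random

          random.seed(0)

          # Hidden cause z drives both the observed feature x and the outcome y;
          # x itself has no causal effect on y.
          data = []
          for _ in range(1000):
              z = random.gauss(0, 1)
              x = z + random.gauss(0, 0.1)
              y = 2 * z + random.gauss(0, 0.1)
              data.append((x, y))

          # Fit y = a + b*x by ordinary least squares, knowing nothing about z.
          n = len(data)
          mx = sum(x for x, _ in data) / n
          my = sum(y for _, y in data) / n
          b = sum((x - mx) * (y - my) for x, y in data) / sum((x - mx) ** 2 for x, _ in data)
          a = my - b * mx

          # The purely associational model predicts y well anyway.
          sse = sum((y - (a + b * x)) ** 2 for x, y in data)
          sst = sum((y - my) ** 2 for _, y in data)
          print(1 - sse / sst)  # R^2 near 1, with no causal knowledge at all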

        • Anoneuoid:

          Machine learning does not work in the absence of prior knowledge. This is something I find fascinating and reasonably widespread — an apparent belief that it is the machine and algorithm doing all the work and producing all the knowledge. Consider the application of AI to the development of artificial limbs that can be controlled by the body. Training of the AI algorithms involves simple (0,1) or (miss, hit) feedback, which argues to your point, but stops short of understanding how this was accomplished.

          It would not have been possible without the biological knowledge that went into understanding how the body controls muscles. The same is pretty much true of any application of AI. Knowledge does not develop in the absence of prior knowledge, and arguments such as this suggest it does. Though perhaps there will be this ability some day — many are predicting it is coming in the not too distant future.

          Behaviorism can reasonably model/capture the process of instrumental learning within a constrained environment and even incremental creative acts derived from the introduction of novel information, but it cannot model what is happening cognitively as well as learning theory can with the insertion of the organism and the thinking and emoting brain/nervous system between the stimulus and the response.

        • How each organism produces the (misses, hits) could be as different as what each does with the information gained. While there will be much overlap, there will also be much variation caused by differences attributable to causal processes within the organisms.

        • The network trained to play Mario Kart has no idea why it is choosing to turn left or “push A”, but of course just the fact that it is analyzing a screenshot and choosing to send that type of input to the game is due to prior knowledge.

          So sure, but then the original claim is trivially meaningless because there will always be prior information involved in anything.

          And personally I think the “cognitive revolution” was a huge step for psychology because the behaviorists were trying to identify “laws” (eg, https://en.m.wikipedia.org/wiki/Law_of_effect) and then try to derive explanations for why these laws should exist. For whatever reason this seems to have since stopped.

        • Causality is anything but trivial. It is essential. Causation is both necessary and sufficient to build from, and without it there is nowhere to begin and nothing to explain and nothing to learn. Mario Kart may be trivial in that it is a game, but similar applications of AI within the medical or defense industry require vast amounts of prior knowledge and interpretation of results to make real progress. AI can easily learn its way through a maze but cannot currently develop a rocket without a hell of a lot of human intervention, and clustering algorithms of natural language do not produce some definitive version of truth in the absence of an understanding of the causal processes at play. Apparent association may be an interesting place to begin but it is a silly place to stop.

          Behaviorism fell short because people do not react the same way to the same stimuli in all situations. Humans are capable of thinking about their thinking and questioning what they are learning and how and in enacting their agency after such analysis. Behaviorism attempted to skip this part of reality by describing the cognitive processes as epiphenomena without causal power within the chain of events. The cognitive revolution was in part a reaction to this absurd assumption.

          Classical conditioning works. Instrumental learning works. However, social learning theory brought some important pieces missing from the theories of Skinner.

        • I’ve personally never gained anything from thinking about causality. It seems like an impotent concept to me… Lots of paragraphs generated about it with little of actual use as a result. Like these DAGs that no one has a real life use for.

          What important pieces have been arrived at via social learning theory?

        • If you are not engaged in causal thinking your work is trivial by definition. Without causal determination there could not be any such thing as science.

        • you may not think about it explicitly but the preferred methodology you have expressed in the past implicitly favors causal models. For example making quantitative predictions about the future and then testing to see if they come about. An important aspect of causality is the one way arrow of time. Another aspect is mechanism. DAGs seem to me to be a way to sort of pseudo-mechanistically model things with inclusion of the one way time stuff. Basically a DAG takes a dynamic process and hopes that the dynamics will have fast equilibration times and models the equilibrium states… not useless but far less useful than mechanistic dynamics imho
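
          (One way to read that claim as code, my sketch with made-up dynamics: let y relax quickly toward 3*x for each fixed input x; once the process has equilibrated, the data are fully described by the static relation y = 3x that a DAG-style arrow x → y would encode, and the transient dynamics are invisible.)

          import random

          random.seed(0)

          rows = []
          for _ in range(200):
              x = random.uniform(0, 10)  # exogenous input, held fixed
              y = 0.0
              # fast relaxation toward 3*x: a discrete version of dy/dt = k*(3*x - y)
              for _ in range(100):
                  y += 0.5 * (3 * x - y)
              rows.append((x, y))

          # At equilibrium the dynamics collapse to the static relation y = 3*x,
          # which is all a DAG-style model of the equilibrium data would see.
          print(max(abs(y - 3 * x) for x, y in rows))  # essentially zero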

        • Curious, there are many many nontrivial tasks to be carried out which do not require causal models. For example: how many ambulance drivers should we have working each day during the summer? Where should we store firefighting equipment to minimize cost and time to deployment? If we have measured air pollution at a network of stations, what is the estimated pollution level at other locations? How much rainfall will there be this year in a certain county? If we measure some number of fish at a point in a river, what is the population in the upstream lake? If we measure the roughness of a road through time, at what point should we resurface it to minimize the cost of transport and maintenance put together…

        • Daniel:

          I agree that not all work requires a formal causal model. What I am trying to highlight is the importance of causal thinking as a disciplined approach to thinking about how to understand and model a problem. Let’s take your example of staffing based on season. My argument is that introducing causal thinking may improve the model and improve the precision of staffing decisions. As with year-over-year (YoY) revenue estimates in the hospitality industry, the number of Saturdays in a month will directly affect both the summed and the averaged values being compared (see the quick check below). Including this variable will improve the estimate of the YoY effect and will also improve the estimate of the number of drivers needed to be “on-call” during a given month so that HR can staff properly ahead of time.

          The reason I believe engaging in a disciplined process of causal reasoning is important is the same reason I believe engaging in a disciplined approach of critical thinking. Without consistently asking ourselves about the assumptions underlying an argument, when the argument is consistent with our prior beliefs we tend not to ask the questions unless it is a part of our process to do so. I further believe this is one of the fundamental problems in psychology currently where creating causal models is viewed positively, but real causal thinking often is secondary and often simply assumed.
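
          (For concreteness, a quick check of the Saturday-count point, using Python’s standard calendar module; the years are arbitrary:)

          import calendar

          # Count Saturdays in each month of two consecutive years.
          for year in (2018, 2019):
              counts = []
              for month in range(1, 13):
                  days = calendar.Calendar().itermonthdays2(year, month)
                  counts.append(sum(1 for day, weekday in days
                                    if day != 0 and weekday == calendar.SATURDAY))
              print(year, counts)

          # The same month can have five Saturdays one year and four the next
          # (December here), so raw year-over-year totals shift even if demand
          # per Saturday is unchanged.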

        • you may not think about it explicitly but the preferred methodology you have expressed in the past implicitly favors causal models. For example making quantitative predictions about the future and then testing to see if they come about.

          Any causality is embedded in the assumptions used to derive the model, sure.

          I guess I’m just repeating an earlier discussion on here but whenever I see that word “causal” I know some type of high level philosophical discussion that goes nowhere is on the way.

          Everything in my past timecone has come together to cause me to type this post. Ok… Now what?

        • Curious said,
          “Without consistently asking ourselves about the assumptions underlying an argument, when the argument is consistent with our prior beliefs we tend not to ask the questions unless it is a part of our process to do so. I further believe this is one of the fundamental problems in psychology currently where creating causal models is viewed positively, but real causal thinking often is secondary and often simply assumed.”

          +1

        • What can a formal consideration of causality (eg, a DAG) add to an analysis like this:

          Netflix has been losing third party content, making their UI more annoying, and seem to keep adding annoying forced social commentary into their self-produced content. Despite this drop in quality, they keep raising prices on a pretty saturated US market. Meanwhile there is new competition on the horizon. Therefore they should see a drop in users, and hence revenue, which should translate into a lower stock price.

          Conversely, they still have a lot of room to grow in foreign markets and seem to have an unlimited line of credit for some reason. These forces would tend to increase the stock price.

          Conclusion: ???

        • Anoneuoid:

          As currently framed, the causal analysis could provide insight in two areas:

          1. Why is Netflix losing content? Where is it going? Why is it going there?

          The answer to these questions could help Netflix decide if they need to change their business model or attempt to acquire some of the new competitors.

          2. Why hasn’t Netflix made inroads into these foreign markets already? Do they lack focused attention on these areas? Or are there political forces, including domestic competitors, that make it difficult?

          Answers to these can also help guide strategy.

        • Anoneuoid:

          As currently framed, the causal analysis could provide insight in two areas:

          1. Why is Netflix losing content? Where is it going? Why is it going there?

          The answer to these questions could help Netflix decide if they need to change their business model or attempt to acquire some of the new competitors.

          2. Why hasn’t Netflix made inroads into these foreign markets already? Do they lack focused attention on these areas? Or are there political forces, including domestic competitors, that make it difficult?

          Answers to these can also help guide strategy.

          I mean, Netflix is losing content because they don’t want to pay rising licensing fees. The fees are rising because the companies that own the content are coming out with their own competitors to Netflix.

          This is just like common sense though? There is nothing “formal” (eg, in the form of DAG) about that.

        • Anoneuoid:

          I was not arguing for a DAG, but you can certainly code different reasons that business is being lost into a DAG and estimate the likely effect for future business. I think this is one of the problems we see with assumptions about causal reasoning — “it just makes sense” rather than trying to estimate the effect of causal changes on variables of interest.

        • It’s a bit like the effect of the number of Saturdays in a month. For hotels that rely primarily on business travelers, this would mean a reduction in revenue, while for those that rely more on weekend guests, an increase. Context is important, and understanding how context affects estimates will improve models and decisions made from those models.

        • you can certainly code different reasons that business is being lost into a DAG and estimate the likely effect for future business… trying to estimate the effect of causal changes on variables of interest.

          Yes, how would you do that given the information in that analysis?

          Say I have 100 shares of Netflix and will need to sell at some point before the end of 2020. Should I do it now or wait?

          I just want a real life example. I could use something like a DAG, but why?

        • Anoneuoid:

          I gave multiple real world examples of how causal reasoning can improve decision making above.

          If you are using ONLY the information from your description of Netflix above — then I recommend you flip a coin as it would be as likely to produce a positive result as would analyzing that limited amount of information.

        • Anoneuoid:

          I gave multiple real world examples of how causal reasoning can improve decision making above.

          If you are using ONLY the information from your description of Netflix above — then I recommend you flip a coin as it would be as likely to produce a positive result as would analyzing that limited amount of information.

          I am assuming this is “Curious”, and no you didn’t (quote it if so)… And of course that info is insufficient to make a wise trade. It ignores macro conditions, etc. Still someone could use it to make a trade, so it will suffice as a simple real life example.

        • FIRST:

          As currently framed, the causal analysis could provide insight in two areas:

          1. Why is Netflix losing content? Where is it going? Why is it going there?

          The answer to these questions could help Netflix decide if they need to change their business model or attempt to acquire some of the new competitors.

          2. Why hasn’t Netflix made inroads into these foreign markets already? Do they lack focused attention on these areas? Or are there political forces, including domestic competitors, that make it difficult?

          Answers to these can also help guide strategy.

          SECOND:

          You cannot consistently predict the success of that trade using ONLY that information. It’s simply not possible.

          You can absolutely use any information you want to make a trade — that does not mean the information was sufficient to make the decision any better or worse. That information is not sufficient to consistently make better decisions.

        • An analysis you could possibly do is:

          1. Softening of the domestic market decreases stock value.
          2. Opportunity in foreign markets increases stock value.

          You could use some estimate of an average effect of expected change over a period of time.

          p = b1*previous_domestic_sales + b2*previous_foreign_sales

          The problem is that I have 4 unknowns: 2 parameters & 2 predictors, and only information about their sign. So I know that b1 is positive, but have no idea about the magnitude, and some idea that the previous sales were a lot (billions). I know that b2 is negative and have literally no idea about foreign sales and no idea about the magnitude. If b1*domestic_demand > b2*foreign_demand then sell. Else if b1*domestic_demand < b2*foreign_demand then hold.

          As this demonstrates, there is not enough information with which to determine which is greater.

        • I meant to write:

          If abs(b1*domestic_demand) > abs(b2*foreign_demand) then sell. Else if abs(b1*domestic_demand) < abs(b2*foreign_demand) then hold.

          Again this is assuming b1 is negative and b2 positive.
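
          (A quick Monte Carlo version of that conclusion, my sketch, with arbitrary uniform distributions standing in for the unknown magnitudes: under sign-only constraints the sell/hold decision comes out either way about half the time, i.e., the signs alone really do not determine it.)

          import random

          random.seed(0)

          # Sign-only knowledge: b1 < 0 (domestic softening), b2 > 0 (foreign growth);
          # magnitudes and the two demand levels are unknown, so sample them arbitrarily.
          sells = 0
          trials = 10_000
          for _ in range(trials):
              b1 = -random.uniform(0, 1)
              b2 = random.uniform(0, 1)
              domestic_demand = random.uniform(0, 10)  # unknown units
              foreign_demand = random.uniform(0, 10)
              if abs(b1 * domestic_demand) > abs(b2 * foreign_demand):
                  sells += 1

          print(sells / trials)  # about 0.5: the signs alone decide nothing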

        • Well, the stock is plummeting right now after hours due to a bad ER. They got half the new subscribers they predicted last quarter.

  4. I find Gigerenzer quite convincing.

    Declarations that people are “irrational” are usually based on comparisons to some simplistic notion of what is “rational” in some greatly simplified academic example. Often, just a little thought is enough to show that the “rational” benchmark is woefully naive. Often, just noting that there are transaction costs is enough to discredit the “rational” benchmark.

    • Sadly, this line of logic can lead to a dark conclusion.

      Logically analyzing an issue and coming to an independent conclusion is costly and beyond many people’s abilities. It can also fragment a group, making the group weaker and harming everyone in the group. So, unreflectively going along with your group can be rational. This would explain why people are seldom amenable to suasion by logical argument or facts (a long-standing frustration of mine).

      The dark side of this is that it could mean that much group conflict is intractable. This would mean Diversity is Bad. You would want your tribal enemies to believe that Diversity is Good, but you would want your tribe to believe that Diversity is Bad.

  5. I did find the Gigerenzer article intriguing and useful. However, I am not convinced. For one thing, he always seems to harp on one-upping Kahneman. I guess this is a contest among psychologists, but I find it rather tiring since I think they both have valuable things to say – and I believe they can both be correct. He also seems intent on taking the behavioral economists down a peg or two. I don’t have a problem with that as they probably deserve that. But he risks displaying exactly the behavior he is criticizing. I can agree that the paternalism (e.g., “nudging”) that behavior economists often defend is not really supported by the evidence. But I don’t think the opposite (people make intelligent decisions and should be left alone to do so) is any more supported by the evidence.

    Much of the evidence he cites seems to involve regression to the mean being mistaken for irrationality (see the sketch below). I can understand how randomness can account for regression to the mean in people’s heights. But to apply that to people’s beliefs (such as their assessment of risks) seems to be relying on a degree of randomness in their beliefs. If beliefs are random (in the manner of heights), then isn’t this evidence of a sort of irrationality?

    Similarly, his analyses of random sequences (of heads and tails, for example) in the presence of small sample sizes are thought-provoking. But the fact that people’s preferences may match what a proper probability estimation provides is not quite the same thing as showing intelligent and sophisticated reasoning about risk, as he seems to suggest. It could just be luck.

    He also shows a number of ways in which risk judgements can be improved if information is provided differently (e.g., replacing probabilities with statements such as “out of 100 people with this disease, 10 will be cured”). I’ve seen his work on these communication methods and I find it excellent. But does it really contradict the bias literature? If there are methods that can reduce biases, they are to be recommended. But that does not contradict the existence of the bias to begin with. This is where he seems more intent on bashing Kahneman than on making a positive contribution.

    In the end, I am not convinced though he has given me a lot to think about. I guess I have a hard time buying the idea that humans show great intelligence and sophistication in risk assessment. When they have plenty of experience they may certainly exhibit this (hot hand, anyone?). But, when faced with novel situations, I see plenty of evidence that they exhibit many of the biases (though I am admittedly guilty of mood affiliation here).
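
    (On the regression-to-the-mean point above, here is a minimal simulation of the mechanism, with purely illustrative numbers: each person has a stable “true” risk belief that is elicited twice with independent noise. Selecting the apparently extreme answers on the first elicitation guarantees that the second round looks closer to the mean, which an analyst could misread as a systematic bias being corrected.)

    import random

    random.seed(0)

    true_belief = [random.gauss(50, 10) for _ in range(10_000)]  # stable beliefs

    def elicit(t):
        return t + random.gauss(0, 10)  # independent measurement noise

    first = [elicit(t) for t in true_belief]
    second = [elicit(t) for t in true_belief]

    # Select the people whose *first* stated estimate was extreme...
    high = [i for i, f in enumerate(first) if f > 70]
    print(sum(first[i] for i in high) / len(high))   # around 76
    print(sum(second[i] for i in high) / len(high))  # around 63: the drop is pure noise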

    • “He also shows a number of ways in which risk judgements can be improved if information is provided differently (e.g., replacing probabilities with statements such as “out of 100 people with this disease, 10 will be cured”).”

      I’ve got a quibble with expressing a probability as “out of 100 people with this disease, 10 will be cured”. I’m OK with something like, “On average, out of 100 people with this disease, 10 will be cured.”
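
      (To make the format point concrete: Gigerenzer’s “natural frequencies” are the same probabilistic information re-expressed as counts in a reference population, which also makes the “on average” caveat above easy to state. A sketch with illustrative numbers, not ones from the paper:)

      # The same diagnostic information two ways (illustrative numbers only).
      base_rate = 0.01      # P(disease)
      sensitivity = 0.90    # P(positive | disease)
      false_pos = 0.09      # P(positive | no disease)

      # Conditional-probability version: Bayes' rule.
      p_pos = base_rate * sensitivity + (1 - base_rate) * false_pos
      print(base_rate * sensitivity / p_pos)   # P(disease | positive), about 0.092

      # Natural-frequency version: the same arithmetic as counts of people.
      n = 1000
      sick = base_rate * n                     # 10 people have the disease
      sick_pos = sensitivity * sick            # 9 of them test positive
      well_pos = false_pos * (n - sick)        # about 89 healthy people also test positive
      print(sick_pos / (sick_pos + well_pos))  # 9 out of about 98: the same 0.092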

  6. “I guess I have a hard time buying the idea that humans show great intelligence and sophistication in risk assessment. ”

    To know how well humans apply risk assessment you’d have to know every risk they assess and how successful they are at assessing each risk. Moreover, you have to avoid biasing your assessment of their behavior with what you think is the *proper* behavior, which you’re forced to decide without a complete understanding of their individual predicament. Last but not least you must ensure that your “proper” choice itself results from a complete assessment of the situation, and not from your misunderstanding of it.

    The fact that humanity is broadly successful argues against the idea that humans generally assess risk poorly.

    • I agree with taking such an evolutionary view. From that vantage, humans would seem to assess risks well for risks they have been exposed to through most of our history. This, I believe, lies behind some of what Gigerenzer refers to – such as our remarkable ability to process visual information. However, many of the risks we now face (and are the subject of analysis), such as public policy towards climate change or nuclear disarmament, do not seem to have had enough time for our biological abilities to have evolved to assess. It is precisely these kinds of decisions that seem (to me) to be prone to the various biases. Do we really have evidence that humanity has been broadly successful at assessing these relatively recent risks? And, if so, can we possibly believe that evolution has accomplished that?

      • Good point. I cringe when I hear people say, “evolution is survival of the fittest”. That overlooks the question, “fittest in what sense?” — when, in fact, “In what sense?” is a moving target. So I describe evolution as “survival of those fit enough to have survived under the circumstances in which they have heretofore found themselves.” Of course, there might be some survival skills which transfer from one circumstance to another — but there is no guarantee that skills at survival “so far” will work for future circumstances.

        • “but there is no guarantee that skills at survival “so far” will work for future circumstances.”

          But in fact there *is* a very powerful mechanism that ensures that humanity will survive: diversity of opinion, belief and response. n groups of people respond in n different ways, dramatically increasing the likelihood that at least one of n groups will be “fit” for some new circumstance.

        • Jim said,
          “But in fact there *is* a very powerful mechanism that ensures that humanity will survive: diversity of opinion, belief and response. n groups of people respond in n different ways, dramatically increasing the likelihood that at least one of n groups will be “fit” for some new circumstance.”

          This has a good point, but is too strong to fit reality.

          The good point can be stated as: “n groups of people respond in n different ways, increasing the likelihood that at least one of n groups will be “fit” for some new circumstance.”

          The “too strong to fit reality” aspects:

          1. No evidence is provided to support the word “dramatically”

          2. The words “very powerful” and “ensures” in the statement, ” there *is* a very powerful mechanism that ensures that humanity will survive” — no evidence is given to support these words in this context.

      • “the risks we now face…such as public policy towards climate change or nuclear disarmament, do not seem to have had enough time for our biological abilities to have evolved to assess. ”

        Wow! That’s a spectacular claim.

        There hasn’t been a nuclear attack since world war two. So what’s the evidence that humans aren’t handling nuclear materials in their own best interests?

        There are clearly some impacts from climate change, but up to now they certainly aren’t delaying the progress of humanity or anything close to that – much less worth foregoing all the benefits of fossil fuels to mitigate. I think you’d be hard pressed to show that the use of fossil fuels up to today, and certainly for many decades to come, has had anything other than a massive positive impact on humanity. Certainly, almost nothing you know in the modern world, not even hydropower, would exist without them.

        • You have missed my point entirely. I am not saying that these are necessarily areas where we have made mistakes or are making mistakes – nor am I saying things are fine. What I am saying is that these are the types of threats for which the defense “that humans are broadly successful at assessing such risks” has no evolutionary basis. And these are areas in which I think there is plenty of reason to suspect humans of exhibiting many decision biases – both over-reacting and under-reacting, overestimating and underestimating, etc.

        • Quote from above: “What I am saying is that these are the types of threats for which the defense “that humans are broadly successful at assessing such risks” has no evolutionary basis”

          I see your point, i think, but i would like to make the following comment. I am unsure if i make any sense in the following, because i always find this evolution thing complicated to understand and i am not sure if i am understanding the point of discussion correctly.

          I would reason it could be possible, perhaps even very likely, for certain mechanisms, processes, and abilities to have developed in humans that make it so that these processes and abilities can be used to deal with things never before encountered. It seems to me that humans have encountered “new things” all the time, in one way or another. Perhaps you could even say that “humanity” has a perfect record of being able to deal with “new things” because “humanity” is still here.

          If this makes (any) sense, could you then not say that “dealing” with “new” and “modern” things like climate change or nuclear disarmament 1) is very likely based on mechanisms, processes, abilities, etc. that have been developed through evolution, and 2) is very likely to be dealt with “successfully” given “humanity’s” track record thus far?

        • You are a “glass nine tenths full” kind of person. Sure, you could say that. I wouldn’t, but you could.

        • I would be willing to say that “dealing” with “new” and “modern” things like climate change or nuclear disarmament 1) *might be* based on mechanisms, processes, abilities, etc. that have been developed through evolution, and 2) *might be* dealt with “successfully” given “humanity’s” track record thus far — but I see no evidence to support saying “is very likely to be” instead of “might be”.

        • I don’t know if I agree that nukes / climate are a fundamentally new kind of issue. Convince me.

          Here’s my argument: If you read the bible you’ll find all sorts of wanderers preaching the wrath of God for various offenses of rulers and people. Of course we know now that’s all fiction, but did people know it was fiction then? No, they almost certainly didn’t. From their perspective the Wrath of God in 400 BC and Nuclear War 2020 AD wouldn’t be different: predictions about the future that, given the knowledge of the time, are equally likely to occur.

          Also there *is* a mechanism by which humans survive and thrive in the face of the unpredictable future: diversity. The wide range of human ideas, especially combined with modern education and communications technology, ensure that as the future approaches, the least realistic ideas are dumped and the more realistic ideas remain and become acted on as necessary. Some groups of humans will always go down on the Titanic in the mistaken belief that it can’t sink. But always there are many who accept the practical reality that the ship is in danger, and act accordingly. The probability that *everyone* will be fooled by reality to the point that the species is destroyed is almost zero.

        • I understood your point.

          How do you know it has “no evolutionary basis”? We can’t make that claim until we know the outcome of the particular problems. And what we know of the outcome so far shows that we are, for whatever variety of well- or ill-conceived reasons, continuing on a positive trend. And that suggests to me that our risk assessment mechanisms are robust to such problems.

        • Hi Dale,

          I guess in general I’m not convinced that climate/nukes are a new category of threat. See my comments to “Martha (Smith)” below. I’m open to convincing but I think that has to be done in light of previous threats and *the perspective of the people in that time (or place)* as to the nature of the threat.

          But I guess my counterpoint to your comment was that humans *have already been* broadly successful at addressing the risk of both climate and nuke war, as evidenced by general improvements in well being throughout the period since these threats have emerged, and thus have *already shown* that their sensors are adequate to cope with them.

          There’s no way we can know whether humans can cope with *any* future threat until after the fact, so we can always speculate that there is no mechanism for some threat or other.

        • Perhaps it would help to choose a more mundane example (I really didn’t mean for nuclear war or climate change to become the focus – although they are certainly worthy topics to discuss). A company is considering the launch of a new product – many uncertainties are present. Do we believe that evolutionary processes have prepared the decision makers to assess the risks well?

          My answer is that evolution is very unlikely to have worked in this way (no, I can’t prove that, but my layman understanding of evolution is that it takes too long to have accomplished what is required). Instead, I believe many of the biases that Kahneman and others have investigated are likely to be relevant in a case like this. Gigerenzer’s claim that decision makers are sophisticated assessors of risks like this seems unwarranted to me. The exception would be if a decision maker had faced many decisions like this in the past – sort of like Gladwell’s (Blink) examples where emergency responders act using heuristics – honed from ample experience.

          Debating whether humans have shown evidence of superior risk assessment skills or not seems to me to be fruitless. We probably believe different things and I don’t think either position can “prove” its case. For all the evidence that humans have survived (thus far and under these conditions, a la Martha Smith above), there are plenty of examples where mistakes were made. My point would be that I don’t think our survival can be taken as evidence of an evolutionary process working on these risks. I do think our survival of attacks by predators (again, thus far and under these conditions) can be seen as evidence of evolution at work. Our brains, eyesight, etc. have adapted (through the process of selection) to protect ourselves against such risks.

          I just find it hard to accept that selection has worked to provide humans with a superior ability to assess risks such as the introduction of a new product. And, if you want to claim that the “market” is an evolutionary mechanism that accomplishes that task, then it might – but again, I don’t think that is an example of evolution.

          This reminds me of an issue that I faced years ago – I pondered what the difference was between a beaver dam and Hoover dam. Scale, obviously, but is that all? A famous ecologist told me “man does not operate by the law of natural selection.” I never understood how that could be possible. Indeed, I don’t think it is. But to claim that everything we do is natural selection gets us nowhere. Natural selection works over the long run and any particular adaptation can be viewed as a random experiment – and one which has a low probability of success. So, I simply don’t believe we have any basis to claim that our current “success” at assessing novel risks is due to natural selection – or the opposite. In other words, I don’t believe that either Kahneman or Gigerenzer is right or wrong. I find both have valuable things to say and I’m looking to reconcile what has been presented as a definitive judgement that these biases are themselves evidence of bias.

        • “A company is considering the launch of a new product – many uncertainties are present. Do we believe that evolutionary processes have prepared the decision makers to assess the risks well?”

          I’m totally puzzled now! :) I don’t get how launching a new product has some element of risk that’s never been encountered before in human history. It seems pretty mundane compared to getting water from the waterhole without getting eaten by a lion. Or compared to launching a canoe on the open ocean to migrate across the water. Or compared to trying to haul down a killer whale – let alone knowing how to find one.

          It seems like modern people often equate the ancient mind with activities like throwing spears, which require mostly snap judgement. But ancient humans also had to excel at planning over days, weeks, months, seasons and years. In doing so, they likely became adept at switching time scales (so adding one more step in the scale wouldn’t be difficult) and probably developed mental models of risk analogous to sets of similar equations that could be fine-tuned rapidly by adjusting “constants” and “coefficients.”

        • In the Gigerenzer article [posted above], Gigerenzer presents two specific claims about researchers’ bias bias which I don’t see addressed in this thread. Gigerenzer ventures that one motive has been an ‘academic agenda to question the reasonableness of people’ in various contexts. Here Gigerenzer relates this claim to Thaler & Sunstein’s proposal to ‘nudge’ people into thinking better. Governments, for example, can fulfill this role, popularly referred to as ‘libertarian paternalism’.

          The second more specific claim is when a commercial agenda is on the court docket like the Exxon Valdez oil spill. In that case, Exxon ‘funded a new line of research using studies with mock juries that questioned jurors’ cognitive abilities. On appeal, Exxon argued that jurors ‘are generally incapable of performing the tasks of the law assigned to them in punitive damage cases.’ [page 307]

          Gigerenzer relates these two claims to a broader agenda: to promote ‘trust in abstract logic, algorithm, predictive analytics over human intuition & expertise.’ This trust was debunked in one study finding that the ‘COMPAS algorithm used in courts predicted no better than ordinary people without any experience in recidivism and had a racial bias’ [page 307].

          This one section on page 307, I think, could have been expanded by way of more examples where cognitive biases have been hijacked to mitigate liability and avoid accountability.

        • Jim said,
          “It seems like modern people often equate the ancient mind with activities like throwing spears, which require mostly snap judgement. But ancient humans also had to excel at planning over days, weeks, months, seasons and years. In doing so, they likely became adept at switching time scales (so adding one more step in the scale wouldn’t be difficult) and probably developed mental models of risk analogous to sets of similar equations that could be fine-tuned rapidly by adjusting “constants” and “coefficients.””

          It sounds like you have no background in anthropology, which has demonstrated that a lot of things that one culture takes for granted (and as universal for humans) may not be part of other cultures. More generally, you wrote “likely” and “probably” with no data nor solid reasoning to back that up — just your opinion, but nothing that might convince someone else. (In particular, it sure doesn’t convince me.)

        • The Babylonian number system was base 60 so that calculations involving celestial events which they estimated to be approximately 360 day periodic could be carried out successfully thereby minimizing risks of crop losses by timing the growing seasons. Accounting systems allowed them to keep track of who owed who over time. The Egyptians were able to lay out pyramids using surveying techniques that ensured a multi decade pyramid construction came out properly square based. They learned what steepness to set the slope of the pyramids by observing failures of the bearing capacity of the soil from earlier too steep pyramids. All of this occurred more than several thousand years ago.

        • “which has demonstrated that a lot of things that one culture takes for granted….”

          “it sure doesn’t convince me”

          C’mon, Martha! :) There’s ****TONS**** of data showing humans can think at a variety of time scales!

          *Many* human cultures live by daily, seasonal and annual patterns which would certainly give them the ability to think on different time scales.

          By the Bronze Age, though, many different cultures already had sophisticated calendars: https://en.wikipedia.org/wiki/List_of_calendars

          And check out the Mayan calendar, which has units ranging from one day to ***23 billion days**** !!! https://en.wikipedia.org/wiki/Maya_calendar

        • What I was taking issue with was “and probably developed mental models of risk analogous to sets of similar equations that could be fine-tuned rapidly by adjusting “constants” and “coefficients”.

        • Oops — managed to cut off part of what I was taking issue with. It should have been:

          “(so adding one more step in the scale wouldn’t be difficult) and probably developed mental models of risk analogous to sets of similar equations that could be fine-tuned rapidly by adjusting “constants” and “coefficients.””

        • OK, fair enough, when I say “rapidly” I don’t mean instantaneously.

          As a geologist I’ve taught the geological time scale many times and people – including me – find it difficult to scale up to a million years, let alone a billion years. It’s just hard to grasp those proportions. And as always some people do it better than others. Think of anything humans can do and there’s always a few people who are stupidly good at it.

          But I do think the average person can manage the range of time scales that are within the common human experience – seconds to decades – and can make the adjustment between scales fairly easily.

        • Thank you, Keith, for that Rubin et al article. I was contemplating that if Gigerenzer had restructured his article differently, to provide examples where the resort to explicating cognitive biases has resulted in mitigating damages and avoiding culpability, the article would have been a blockbuster. I have noted that rarely does anyone present a convincing explanation of a particular use of bias in a specific context.

          In international relations related issues, the heuristics/biases literature can be useful when it is discussed in collaborative settings. Fine when contexts are relatively simple.

          Since 2009, in several technology conferences/workshops, I’ve raised the question of assumptions in algorithms. Simply b/c I was not familiar with the subject matter.

      • Strictly speaking, time is irrelevant. In evolutionary terms, the question is whether there has been exposure to the risk that would lead to extinction. We have been exposed to nuclear weapons technology and didn’t wipe ourselves out. There has been sufficient “time” then to establish that the variations of humans now alive are capable of handling nuclear weapons, but tomorrow there may be a confluence of factors never before experienced by humans that lead to our extinction even by nuclear war. That is to say, we do not evolve to “fit” our environment, we simply survive. Those that don’t make it didn’t have what it took. If no one makes it, that species was not “fit.” Time is irrelevant, except in the sense that there needs to be enough time to expose members of the species to enough risks that their survival likely means they are fit for the many variations in a particular environment. But still, tomorrow may bring an extinction event.

        • This is certainly not my expertise, so perhaps someone can clarify for me. I can’t understand how time is irrelevant. I don’t believe there is a concrete specific threshold of time at which evolution works, but I thought that it required genetic adaptations that take generations – generally many generations. I guess bacteria evolve rather quickly but I always thought mammals took quite a bit longer for evolution to produce the kinds of changes that would be associated with developing a superior sense of risk assessment for abstract risks such as the ones calling for explicit probability judgements. Can someone shed some light on this?

        • Dale said,

          “I thought that it required genetic adaptations that take generations – generally many generations. I guess bacteria evolve rather quickly but I always thought mammals took quite a bit longer for evolution to produce the kinds of changes that would be associated with developing a superior sense of risk assessment for abstract risks such as the ones calling for explicit probability judgements. Can someone shed some light on this?”

          Here is an attempt to shed some possible light:

          Re “I thought that it required genetic adaptations that take generations”:
          It might in some cases, but in other cases, there might be genetic diversity within a species, and some new environmental factor (e.g., a disease that spreads to a population that had not experienced that disease before) that quickly kills off a lot of individuals that have (or don’t have) one particular variant of a gene.

        • ” always thought mammals took quite a bit longer for evolution to produce the kinds of changes that would be associated with developing a superior sense of risk assessment for abstract risks ”

          There are a lot of different mechanisms of adaptation. One mechanism is pre-adaptation: an organism has an existing characteristic that, due to some change in the environment, suddenly offers a significant advantage. Because abstract risk is conceptual rather than physical, it seems reasonable to suggest that the huge variation in human conceptual abilities would generate a significant-sized population that’s pre-adapted.

          Contrast this with, say, growing a longer neck, which, yes, would take many generations.

          But I’m still not convinced that there’s anything special about climate / nukes / product launches that makes them different than anything humans have encountered before. IMO humans have long been adapted to long term risks.

        • Jim said,
          ” IMO humans have long been adapted to long term risks.”

          “Adapted to long term risks” doesn’t make sense to me, since the phrase “long term risks” includes lots and lots of possibilities. I don’t see why being adapted to *some* types of long term risks would imply being adapted to long term risks in general. (i.e., there’s a big difference between “some” and “all”)

  7. Gigerenzer has long claimed that human rationality is a lot better adapted to the typical decision contexts we face than the bias-and-faulty-heuristics crowd thinks. I’m generally with him on this. The question arises, however: if we’re so good at assessing risks, why do we find such glaring examples of failed or inadequate responses to critical threats like climate change and war?

    A partial answer is that rational assessment is just one of several inputs into a decision process. Conformism, avoidance of cognitive dissonance (a.k.a. denial), rivalry, deference and other emotional triggers also play a role and can offset or override what reasoning alone tells us. In addition, we often encounter collective action problems that drive a wedge between what we individually understand to be beneficial and what we can collectively implement. And through a sort of reinforcement or “naturalizing” process, repeated outcomes or non-outcomes of public/collective behavior influence our framing and perceptions to justify them.

    I’m sure this is not an exhaustive list, but it’s enough to show that Gigerenzerism does not imply a world of rational behavior.

    • It’s also possible we’re incorrectly assessing what constitutes a “failed or inadequate” response to a threat.

      Especially in something like a war, can we really accurately assess the alternative that never occurred? Probably not most of the time. So we don’t really know if going to war was a failure or a success in risk assessment.

      Also, we don’t need to capture the perfect or optimized response. We just need a response that gets a lot of people through. And as noted above, diversity is also important: it’s the diversity of views that allows us to succeed. Not everyone has to be right. We can also continually reassess and adjust our path as the apparent likelihood of different events varies.

    • Personally, I much prefer the second, which, at least as framed by Cialdini, directly addresses a number of your points:

      1. People aren’t stupid, but being perfect rational actors requires an irrational computational cost. So we have evolved to use shortcuts which, under normal circumstances (or at least normal on evolutionary time scales), work very well but which are vulnerable to manipulation.

      2. The small nudge/big impact claims that hold up generally come down to

      A. the subjectivity of impact (the results of a tie-breaker may have a trivial effect on a buyer/decision-maker choosing between two nearly identical options but can have huge consequences for the seller),
      B. path dependency, or
      C. cumulative effects.

      The Jedi mind trick studies that don’t fall into these three categories usually collapse under scrutiny.

      3. Cialdini’s best-known book explicitly anticipates Gigerenzer’s framing, arguing that awareness of manipulation is a social good because it makes manipulation less effective, thus allowing people to “reach their own goals.”

      4. It is also important to note that the amount of government money spent on these techniques is a tiny fraction of the billions upon billions spent on marketing and PR. If you are really concerned about people being herded like sheep, focusing on the public sector here is like worrying about secondhand smoke on the Titanic.

  8. “Not only has it come to define behavioral economics but it also has defined how most economists view psychology: Psychology is about biases, and psychology has nothing to say about reasonable behavior.” This is true because economists have spent their entire careers defining reasonable behavior as utility maximization (or at least satisficing… with appropriate adjustments for the costs of decision-making). Thus, psychologists have nothing to *add* to the definition of reasonable behavior. But they would be essential in delineating the boundaries of *unreasonable* behavior. In practice, when economists see behavior that looks irrational, their response has been to embed the decision problem in some larger problem in which the decision is not as irrational as it looked. (Andrew has often commented on the somewhat odd pop counterliterature in which economists lecture noneconomists about why they aren’t thinking closely enough about something.) If one could make unreasonable behavior a core brain issue in some circumstances, this standard paradigm could be expanded. But if people’s rationality is “good enough for government work,” as it were, the behavioral challenge is defanged.

  9. Timely, in my case. Of the many books I’ve read in an attempt to (a) be a better thinker and (b) eviscerate B.S.-peddling expert witnesses, “Heuristics and Biases” (Gilovich, Griffin, and Kahneman) stands out as one I found especially enlightening. Recently I suggested it to my eldest son, pulled it off the shelf, and flipped through to the chapter on anchoring (a particularly weird heuristic) to use as an example of the perils of automated decision-making.

    Here’s how that chapter begins: “Imagine walking down a supermarket aisle and passing an end-of-aisle display of canned tomato soup. A sign on the display says, ‘Limit 12 per customer.’ Would such a sign influence the number of cans you would buy? … Our intuitions say no, but empirical evidence indicates that purchase behaviors are influenced by such a sign (Wansink, et al, 1998).” Ruh roh.

    Pages and pages of numerous studies by even more numerous researchers follow; almost all with n ~ three dozen and “[p]articipants’ head movements also influenced the speed with which they generated their answers to the self-generated anchoring items … p-less-than .05” as a typical conclusion. So, I sent my son off with a different book and began to look at things in a bit more detail and in the light of things I (think) I’ve learned from this blog.

    By the time I was done I’d decided that I had originally fallen into a trap of self-flattery by thinking I was so much smarter than all these students and doctors and researchers who kept being fooled by obvious cognitive illusions. The thing is, the sample sizes are mostly quite small and so are the effect sizes. This business of turning all these individuals, many and oftentimes most of whom produced the correct answer, into so many marbles in an urn and outputting the data summary of their pureed scores as “people are dumb, p-less-than .05” made me blind to the fact that lots of the people in these studies, many with no training in probability and statistics, had either learned or intuited the correct answers to some difficult problems.

    So now I’m wondering: why not figure out how they came by that ability and see if we can’t improve education in this area?

    • >> “Our intuitions say no, but empirical evidence indicates otherwise [Wansink, et al]”

      It is so funny and so sad at the same time…

  10. Something not exactly on the topic of this discussion, but related: An excerpt from an entry by Jonathan Seigel (who works on drug studies) in today’s ASA Connect:

    “Science vs. math. I think Deming’s first and most foundational challenge is to think of the field as a science of variation, and of how to acquire knowledge in the face of variation, rather than a branch of mathematics. Mathematics deduces consequences from assumed truths. Science attempts to generalize and predict from what we observe. I think the best explanation of the practical difference is Lawrence Summers’ [sic] distinction between what he called the “smart people,” econometricians and the like who had lots of fancy mathematics to back up their theories and were the obviously right people to follow when he was a grad student if one wanted to be thought smart and to advance one’s career, and the “stupid people,” sociologists and such, who had only a few empirical observations they were unable to fully explain and were therefore obviously wrong. As Dr. Summers recounted, our world today looks much more like what the stupid people predicted decades ago than what the smart people predicted. Deming would say this means the stupid people weren’t so stupid. And maybe the smart people weren’t so smart.”

      • My skepticism of economists started when I was an undergraduate, based on two things: 1) The one Econ course I took was probably the easiest of all my college courses; 2) the Econ department seemed to court the B students in the honors math program (i.e., tried to persuade them to go into economics).

  11. Is anyone else struggling to understand Gigerenzer’s point about regression towards the mean, with respect to risk estimation? Regression to the mean is a consequence of X and Y-X being negatively correlated if X and Y are independent, right? But the argument seems to be that variation in people’s estimates would reduce the regression gradient somehow. Which doesn’t seem right, and the two variables are supposed to be correlated anyway, so I can’t see how it relates.
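    The closest I can get is the conditioning story associated with Erev, Wallsten, and Budescu: if estimates are just truth plus random error, the same data can look biased or unbiased depending on which variable you slice on. Here’s a toy simulation (my own illustrative numbers, nothing from the paper):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy setup: true log-risks, and estimates that are truth plus
    # pure random error -- zero systematic bias by construction.
    n = 100_000
    truth = rng.normal(0.0, 1.0, size=n)
    estimate = truth + rng.normal(0.0, 1.0, size=n)

    # Slice by TRUTH (the rarest risks): estimates look unbiased.
    rare = truth < np.percentile(truth, 10)
    print(truth[rare].mean(), estimate[rare].mean())   # roughly equal

    # Slice by ESTIMATE (the lowest stated risks): "bias" appears,
    # because extreme estimates regress toward the mean of the truth.
    low = estimate < np.percentile(estimate, 10)
    print(estimate[low].mean(), truth[low].mean())     # estimate is more extreme
    ```

    If that’s the mechanism, a flattened slope in risk-perception plots could come from unsystematic error alone, with no systematic bias in anyone’s head. But I’d welcome a correction if Gigerenzer means something else.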

    • I found this part of the paper unimpressive. I didn’t understand it and I didn’t see why he needed such an elaborate explanation. A very simple explanation is information cost. Most people just don’t spend the time to learn the actual numbers for very small risks. There is no point for most people. You don’t have to know the death rate from botulism to know you should not eat spoiled food.

      These numbers are also very small and hard to calculate. 1,000 deaths is what percent of the total population? That’s a nasty calculation. And what IS the total population anyway? Is it deaths per year? What if you care about YOUR odds of dying from something? Then the calculation is even worse. You have to multiply by some number of years, so it matters whether it’s mostly old people who die from it.

      I, personally, have almost no idea what the numbers are for these small risks. Moreover, I have almost no idea what the ballpark is. 10 sounds small, but what about 1,000? 10,000? There are a LOT of people in the country.
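      And just to show how nasty the arithmetic is, here it is spelled out (the 1,000 deaths per year and the ~330 million population are illustrative numbers of mine, not anything official):

      ```python
      # Illustrative numbers only: 1,000 deaths per year, US-sized population.
      deaths_per_year = 1_000
      population = 330_000_000

      annual_risk = deaths_per_year / population
      print(f"{annual_risk:.1e}")        # ~3.0e-06, i.e., about 0.0003% per year

      # "YOUR odds" over a lifetime: crude version, multiply by years of exposure.
      # This ignores age structure, which is exactly the complication above
      # (it matters whether it's mostly old people who die from it).
      print(f"{annual_risk * 80:.1e}")   # ~2.4e-04 over 80 years
      ```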

  12. FWIW, I’ve always thought that a paragraph like “Disease X has a base rate of 0.1. For people with the disease, the test will correctly diagnose the disease 90% of the time.” was not likely to be read as precisely as necessary by people who didn’t take probability theory.

    In other words, people who took (and did well in?) probability theory are accustomed to the phrasing and the differences between what these terms represent. Other people may _think_ that they understood what they read, but many actually don’t. So, I’m not surprised that once the problem is restated using those “natural frequencies,” people in general do much better (a sketch after the next paragraph makes this concrete).

    Also, when it comes to identifying randomness, when the question is phrased as “whether HTHTHT is more likely than HHTHTH,” I don’t think many people read it literally. I suspect that they regard HHTHTH as an “example” or representative of “all sequences” that don’t exhibit a pattern, and therefore conclude that observing a sequence with a pattern ought to be less likely.
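    On the natural-frequencies point: the quoted problem gives a 10% base rate and 90% sensitivity but no false-positive rate, so the 20% below is my own assumption, purely for illustration:

    ```python
    # Base rate and sensitivity from the quoted problem; the 20%
    # false-positive rate is an assumed number, just for illustration.
    base_rate, sensitivity, false_pos_rate = 0.10, 0.90, 0.20

    # Conditional-probability phrasing (Bayes' rule):
    p_positive = base_rate * sensitivity + (1 - base_rate) * false_pos_rate
    print(base_rate * sensitivity / p_positive)             # 0.333...

    # Natural-frequency phrasing: imagine 1,000 concrete people.
    sick = 1_000 * base_rate                 # 100 have the disease
    true_positives = sick * sensitivity      # 90 of them test positive
    false_positives = (1_000 - sick) * false_pos_rate   # 180 healthy positives
    print(true_positives / (true_positives + false_positives))  # 90/270 = 0.333...
    ```

    Same answer both ways; the second phrasing just keeps the reference class visible, which seems to be why people find it so much easier.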

  13. Odd to see loss aversion discussed as an irrational bias. Surely it’s a characteristic of a preference ordering that is conditional on the status quo. Standard economic models assume that people optimize by adding up stuff and don’t care about changes in state, but what in the more fundamental logic of a rational actor model says that adverse change (loss) shouldn’t hurt disproportionately to the amount of stuff involved?

    • +1

      I think that sometimes the attraction to the concept of “loss aversion” as an “irrational bias”, is that it’s just a disguised way of saying, “If you don’t agree with me, then you’re irrational.”

    • I think loss aversion also has a social component. People are constantly trying to hornswoggle you out of a few bucks and so people are very attuned to identifying cheaters and can get very upset if they are chiseled. This aversion to being cheated or stolen from translates into what we call loss aversion.

      There is an analogous aversion to losing small amounts to non-social causes such as mice eating some of your corn or a high wind blowing the roof off your yurt.

    • I thought that the following article in the Guardian was quite informative.

      Nudge Economics: Has Push Come to Shove for a Fashionable Theory?

      https://www.theguardian.com/science/2014/jun/01/nudge-economics-freakonomics-daniel-kahneman-debunked

      Gigerenzer thinks that behavioral economics is in essence forging a ‘cover your backside’ framework that favors commercial interests and co-opts various domains, including academia, with Kahneman and Thaler as its ideological purveyors. Now, this doesn’t necessarily suggest to me that Gigerenzer discounts the value of heuristics-and-biases insights. Rather, he proposes a framework that prominent academics themselves have much neglected. More fundamentally, Gigerenzer is right to imply that those who bear the costs and consequences of commercial agendas can be in an even better position to assess risk if given a little statistical literacy training. Well, that is great. But given the extent of statistical controversies, I am not sure where that is going. Having said that, I think that some people are simply better diagnosticians regardless of their training, and they are the ones sought out when such perspicuity is demanded.

  14. I’ve lost the thread, so I’ll place this as a new comment, but I am mostly responding to jim’s puzzlement above. Of course, humans have adapted to numerous threats and issues we have faced in the past. So, we have adapted to things like climate change, watering holes drying up, making plans for upcoming migrations, etc. But I don’t see how these relate to the introduction of a new product. The examples all involve things that took place over many generations. Some tribes moved as the climate changed; others failed to do so (presumably the Anasazi, though admittedly we are not really sure what happened to them). To compare new product introductions to such changes requires what I see as an enormous leap of faith in human evolution. Similarly, Daniel’s examples of the amazing ancient peoples’ abilities to use celestial reasoning and calculation do not convince me that such reasoning can be developed quickly, within one or two generations. The rate at which new technologies are now introduced (e.g., AI, genetic manipulation of human embryos, etc.) seems to be on a completely different time scale from the examples being provided about what humans have done in the past.

    Yes, it is possible that humans have latent genes that are just waiting to be activated by some environmental stress and which will turn out to be a successful adaptation to an environmental change. But it seems more likely (to me, at least) that the stresses will come more quickly than our ability to count on evolution to provide a successful response.

    In the end, it will be evolution that takes place, and extinction is a perfectly common feature of evolutionary processes. I see little reason for optimism about humans’ abilities to assess sophisticated modern risks without a considerable effort at Type 2 reasoning. Gigerenzer would appear to want us to believe that our heuristics are up to this task, and I am simply not convinced. And the discussion above does nothing to convince me further. Note that I am not saying that humans cannot or will not be able to assess these risks (though I am a glass-1/3-full kind of person), but I am saying that I don’t think our heuristics are up to this task. That is where I still find Kahneman’s work more convincing than Gigerenzer’s.

    • Dale, I think a strong case can be made that cultural evolution is faster and is the primary means of adaptation in modern humans. Consider how varied cultural practices are, from, say, the Amish, Hasidic Jews, or hunter-gatherer tribes in the Amazon, to beauty influencers on YouTube, or the role that text messaging on a 12-key phone briefly played in chngn how u tlk 2 frnds

      I don’t have much to say about the issue of heuristics, except to say that I’m skeptical of any claim to really understanding them among psych, because I’m skeptical of all of psych. Maybe in the future we will see cultural practices change so algorithmic analysis of risks becomes a religious commandment with a priesthood of applied math people chosen at a young age to wear special robes and live in temples…

    • Dale said,
      “Daniel’s examples of the amazing ancient people’s abilities to use celestial reasoning and calculation do not convince me that such reasoning can be done quickly within one or two generations”

      Daniel’s examples were:
      “The Babylonian number system was base 60 so that calculations involving celestial events which they estimated to be approximately 360 day periodic could be carried out successfully thereby minimizing risks of crop losses by timing the growing seasons. Accounting systems allowed them to keep track of who owed who over time. The Egyptians were able to lay out pyramids using surveying techniques that ensured a multi decade pyramid construction came out properly square based. They learned what steepness to set the slope of the pyramids by observing failures of the bearing capacity of the soil from earlier too steep pyramids. All of this occurred more than several thousand years ago.”

      Point 1: What allowed the cited Babylonian and Egyptian cultures to develop and employ these systems successfully? If I am not mistaken, Egyptian culture at the time the pyramids were built had strong rulers and slave labor.

      Point 2: What happened to the Babylonian and ancient (pyramid building) Egyptian cultures? If I’m not mistaken, the Babylonian Empire was short lived. Egypt “changed hands” many times since the era when the pyramids were built.

      • The examples were meant to show that humans have had plenty of time to develop techniques to analyze situations systematically on behalf of a risk management and long term thinking agenda. Also I believe one current theory on Egyptian pyramids was that they were a social services system rather than slave labor. The Pharaoh paid people wages during dry periods thereby stabilizing people’s living situations while they awaited the cyclical Nile floods that made the lowlands fertile for growing.

        All of this goes toward my point mentioned somewhere else about how cultural practices are the main evolutionary mechanism at work in human adaptation

        • If your point is that cultural evolution can take place much more quickly than biological/genetic adaptation, then I agree completely. But it is quite a stretch to then say that cultural evolution works in the same way – recall my question about the difference between a beaver dam and Hoover dam and the ecologist’s response that “man does not operate according to the law of natural selection.” I don’t think we are able to either support or falsify claims about whether or not cultural evolution works in that way – it will come down to our personal beliefs (just how full is that glass?).

          The other point I’d make about evolution is that many, if not most, species do not survive. It is impossible to point to current human success as anything other than “success” at this time and under these circumstances, as Martha has put it. So, when faced with a risk or stress, our culture adapts. After a half century of nuclear weapons we have created a United Nations, numerous arms treaties, monitoring technologies, etc. How can we judge whether this is a successful adaptation or not? And, to the original post about Gigerenzer’s paper, do we believe that humans have developed sophisticated and successful risk analysis/assessment concerning the threat of nuclear weapons?

          Again, I think it comes down to our personal belief system (Jim sees the glass as nine tenths full and I see it as nine tenths empty). And, this is where I don’t see a need for Gigerenzer and Kahneman to be at odds, except for their academic rivalry. I personally see much insight from the biases humans exhibit concerning uncertainty – I find much of that research compelling. The fact that heuristics enable humans to accomplish much – to enable System 1 to replace the effortful work of System 2 – I also find insightful and compelling. The question of the day (year, era, …) is to examine what circumstances our heuristics work well in and those in which it does not – and the reasons why.

        • Dale said,
          “The question of the day (year, era, …) is to examine what circumstances our heuristics work well in and those in which it does not – and the reasons why.”

          +1

        • Dale, I think you would like the book “Game Theory Evolving” by Herbert Gintis.

          The basic premise is that game theory tells us there are equilibrium and optimal ways to do things in systems we can model with game theory, like, say, the US Tax Code… basically, if you organize your organization in a certain way and buy some of this, and set up that kind of trust or whatever, then you can reap benefits… But in any real-world system, the strategies tend to be complicated, and why should we expect people to do anything close to the optimal thing when it would take a lot of knowledge to figure out how to optimally cheat the tax code?

          His answer to that is that strategies are basically evolving through time as people discover perturbations to their current strategy that work to improve things, and then other people adopt that strategy when they see how effective the strategy is… So strategies are a population of ideas that exist within a population of people, and they can spread and grow and mutate and evolve.

          Take for example that guy James Holzhauer who did that big run on Jeopardy recently. He had a new strategy: win a few big amounts early so he could find Daily Doubles, double down those amounts, and rapidly blow up the amount of money he was working with in an exponential way… He was a professional sports gambler, so he had carefully analyzed this strategy himself. But now that it’s out there, does someone else have to understand how to come up with that strategy in order to employ it? No, they just have to understand how it works well enough to employ it themselves. This is really like a novel mutant legless lizard that is successful because there are lots of burrowing animals in the area, and so it has more legless babies, whose descendants, after a few million years of refinement, are snakes.

          So, I think it’s worthwhile considering how cultural evolution takes the place of genetic specialization in many cases… because culture affects reproductive success (physical) as well as cultural reproduction (i.e., adoption). Europe adopted Arabic numerals and dropped the use of Roman numerals without becoming genetically Arab.

        • Are you equating strategy success on Jeopardy with evolutionary success of a lizard species? If so, it is quite a stretch. As I understand it, biological evolution is governed by traits that enhance the survival of a species. Success on Jeopardy is governed by strategies that win the game against other competitors. I suppose these are equivalent on some mathematical level of modeling, but that does not make them the same. I just don’t buy that successful strategy on Jeopardy is a relevant data point for human adaptation to modern risk. It could be, but it might not be. And, note that this particular example is the result of System 2 thinking, not the simple heuristics of System 1.

        • The basic requirements for evolution are:

          1) A population of objects.
          2) The ability to reproduce those objects with some variation.
          3) A selection mechanism whereby some are reproduced more than others.
          4) Iteration.

          So,

          1) there are a variety of strategies on Jeopardy.
          2) Suddenly one guy does very well with a new strategy, and others adopt it with some variations.
          3) Those who adopt the strategy stay on the game longer and win more money, a fact which is publicized among the contestants
          4) In the next season, more people adopt variations on the strategy….

          After a number of iterations, the population of strategies on Jeopardy which used to be mostly something else and just 1 guy doing the new strategy is now much more tilted towards the new strategy… evolution!
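          If it helps, the four requirements map directly onto a toy simulation. The payoffs, population size, and mutation rate below are all made up; the point is only the replicator logic:

          ```python
          import random

          random.seed(1)

          # Made-up payoffs: the "new" strategy wins more on average.
          PAYOFF = {"old": 1.0, "new": 1.5}

          # 1) A population of objects (here, strategies).
          population = ["old"] * 90 + ["new"] * 10

          for generation in range(20):                    # 4) iteration
              # 3) selection: strategies reproduce in proportion to payoff...
              weights = [PAYOFF[s] for s in population]
              population = random.choices(population, weights=weights,
                                          k=len(population))
              # 2) ...with some variation (occasional copying errors).
              population = [random.choice(["old", "new"])
                            if random.random() < 0.01 else s
                            for s in population]

          print(population.count("new") / len(population))  # "new" now dominates
          ```

          Run it and the mix tilts heavily toward the new strategy within a couple dozen generations, which is all I mean by “evolution!” here.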

        • Martha’s link to the wikipedia page is helpful – the entire discussion is about genetics, mutation, and selection. Daniel’s succinct statement below shows that the adoption of strategies can be expressed in similar terms. But that similarity does not mean that the two circumstances are subject to the same forces and principles. The improved Jeopardy strategy will “win” over inferior strategies and “survive” more than them. But “winning” and “survival” are measured in terms of their monetary payoffs in the game. Genetic mutations will “win” and “survive” if they result in better survival for a species, measured by its ability to continue. It is easy to see the similarity, but I don’t think that means they work in identical ways. The mechanism that governs genetic mutation is not the same thing as that which governs strategy mutation. The time scales are entirely different and one mechanism is biological and the other is cultural.

          Perhaps I am wrong. Perhaps we can view cultural evolution as another variant on biological evolution. However, the latter makes no promises regarding which species will survive – and there are far more failed mutations than successful ones. The same could be said for cultural evolution. OK – I concede – I’ll agree to the commonality. But, returning to the Gigerenzer/Kahneman debate, I don’t see that this gets us anywhere. There is no basis to assume that we can identify which adaptations will survive – either cultural or biological – except after the fact. And, I see little reason to believe that System 1 heuristics should be viewed as successful adaptations to modern risks. But I do see reasons for these heuristics to be viewed as successful adaptations to biological risks. We learn to flee when hearing particular sounds or see particular images – and these responses seem to be triggered by modern risks that have nothing in common with those biological risks.

          An example: the invisible gorilla. Our human ability to focus and concentrate has served us well biologically. It permitted us to work with tools without extraneous distractions (unless they sound like a lion approaching). The invisible gorilla experiment shows how our ability to concentrate can blind us to relevant information – with applications that permit marketing ruses that have us concentrate on offers and not see the conditions that are attached. This seems like a poor assessment of risks. In terms of survival of strategies, the marketing strategies that employ deception are likely to “survive.” The consumers that fall prey to these deceptions do not fare as well (at least financially). The heuristic that served us well in the past seems to be counterproductive now.

          How will it turn out? The deceptive company might prosper or consumers who are not deceived may be the survivors and the company goes bankrupt. Either outcome seems equally likely to me and I don’t see how the evolutionary analogy helps at all.

        • Dale, the same thing happens biologically. There’s no reason to believe that adaptations that were successful in the past will ensure success in the future. The Spotted Owl maybe goes extinct because of forest loss, while the Black Bear is well adapted to scavenging in the garbage dumps at the edge of town…

          I don’t think the question is “are humans perfectly adapted to the risks they face today” but rather “are some humans more well adapted to risks they face today than others, and do their traits continue forward in the population more frequently?” So perhaps in the future we’ll find people who have say cultural beliefs that they teach to their children about the evils of unrepentant marketing, and how to avoid being scammed, or we will pass laws (a kind of culture) that allows us to punish people for making misleading but technically correct statements… whereas today that is not a common cultural practice. For example.

        • Daniel said:
          “The basic requirements for evolution are:

          1) A population of objects.
          2) The ability to reproduce those objects with some variation.
          3) A selection mechanism whereby some are reproduced more than others.
          4) Iteration.”

          Some quibbles:
          2) I’d say: A mechanism that reproduces those objects with some random variation.
          3) I’d say: A selection mechanism that varies by time and location, whereby some are reproduced more than others and some die off completely.
          4) I’d say something more precise like, “Iteration with the populations of surviving objects of the preceding step, together with new objects created by reproduction.” (This still isn’t quite right, because it’s in some ways more like a continual process rather than having the discrete steps that are in the usual description of “iteration.”)

        • The quibbles seem fine, but I will point out that sexual reproduction is a kind of optimization: you don’t actually need it to get evolution, it just makes the whole system “more efficient.”

          In particular, if you’re going to include things like cultural evolution, you can’t enforce a sexual style of reproduction (that is, the combination of two “cultural practices” into another cultural practice that’s a mixture of the two for example). As long as an idea is reproduced and has some variation, and can be selected through iteration, the population will evolve.

          And, yes, the discrete-time approximation is not quite right either, but it’ll do as a first approximation.

        • The best theory, imo, is that the Egyptians used a type of limestone concrete, so all those big stones were cast on the spot. Doing it that way would apparently have made the project an order of magnitude easier:

          According to Guy Demortier [12], re-agglomerating stones on the spot greatly simplifies the logistic problems. Instead of 25,000 to 100,000 workmen necessary for carving [13], he deduces that the site occupancy never exceeded 2,300 people, which confirms what the Egyptologist Mr. Lehner discovered with his excavations of the workmen’s village at Giza.

          https://www.geopolymer.org/faq/faq-for-artificial-stone-supporters/

          Warning: That site seems pretty “crackpottish,” maybe because it was translated from French? Still, the info there is good.

          Basically, they did the experiment of making these stones to see how difficult it was, and found it was much easier than carving and moving large stones. There is a video of this, btw (https://youtube.com/watch?v=nofW_vREp9E). Then they sent a sample to archeologists without telling them where it came from; apparently the experts couldn’t tell it was artificial limestone.

          All sorts of odd stuff popularly attributed to things like ancient aliens makes perfect sense once you know about cast limestone. Not just in Egypt but all over.

        • Long ago I heard a guest on a late-night loony radio show argue for that casting theory, and I have assumed ever since that it was crackpot. My prior remains the same for now, since I have yet to see an authoritative source give credence to the theory.

        • What exactly about it do you disbelieve? The main claims are:

          1) It is possible to make cast limestone that is indistinguishable from carved limestone to archeologists unless they are specifically looking for it.

          2) It would be an order of magnitude easier to cast the blocks instead of carving/dragging them.

          3) The pyramid builders knew about this technique.

          4) The pyramid builders used this technique.

          Once I learned about cast limestone, I couldn’t see why anyone wouldn’t build a pyramid that way. It isn’t like I have tried to do that myself, so 1 or 2 could be wrong… Actually, I did look into casting a block just to see, but the smallest amount of fly ash I could get was 27 tons (for like $800). That was just way too much fly ash.

          Regarding 3, I can’t really check that myself unless I learn to interpret hieroglyphics. However, that link mentions multiple stelae and frescoes that are claimed to describe the process.

          Regarding 4, it seems extremely likely they would use the 10x faster and easier method if they knew about it (assuming 1-3 are true). Also, just watch a show like ancient aliens and keep in mind you can cast stone:

          Scattered about the site are many monolithic stones that are precisely cut and routed in multiple levels with perfect angles and straight lines. Many of the blocks are cut in perfect interlocking shapes–a feat far beyond a primitive civilization that used stone and bronze hand tools. The only explanation for this precision work is the use of machine tools similar to the diamond-tipped quarry saws, masonary drills and CNC routers used today and powered by electricity or internal combustion engines. No archeologist or scientist can offer any other explanation for the amazing existence of Puma Punku’s stone work.

          https://www.theancientaliens.com/puma-punku

          So what we have is a theory that makes an otherwise surprising claim (can’t say prediction since the ancient stones were known about beforehand) that we should find that ancient civilizations were able to create perfectly shaped very large stone blocks. According to Bayes’ rule we should then give it a very high posterior probability. All the other likelihoods in the denominator are small either because they are very vague (ancient aliens) or inconsistent with the evidence (use of precision tools), but if the stones were cast we should expect exactly what we have found.
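          Spelling that out with Bayes’ rule (every number here is an illustrative guess of mine, just to show the structure of the argument, not anything from Davidovits or elsewhere):

          ```python
          # All priors and likelihoods are illustrative guesses; only the
          # structure of the argument matters. "Vague" theories like aliens
          # spread their probability over many possible observations, so the
          # likelihood of THIS observation is small; precision machine tools
          # get a tiny prior because there's no evidence they existed then.
          priors = {"cast": 0.05, "machine_tools": 1e-6, "aliens": 1e-6}
          priors["hand_carved"] = 1.0 - sum(priors.values())

          lik = {  # P(precisely shaped megalithic blocks | hypothesis)
              "cast": 0.9,
              "machine_tools": 0.9,
              "aliens": 0.01,
              "hand_carved": 0.01,
          }

          evidence = sum(priors[h] * lik[h] for h in priors)
          for h in priors:
              print(h, round(priors[h] * lik[h] / evidence, 3))
          # "cast" ends up with most of the posterior under these guesses.
          ```

          Obviously you can fight about every one of those numbers; the point is just where “vague” and “inconsistent with the evidence” enter the calculation.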

        • “1) It is possible to make cast limestone that is indistinguishable from carved limestone to archeologists unless they are specifically looking for it.”

          So get a geologist to put a limestone chip under a microscope. Wake me up when that happens and the geologist verifies the theory.

          >> So get a geologist to put a limestone chip under a microscope. Wake me up when that happens and the geologist verifies the theory.

          This already happened decades ago. It was one of the first things the guy who came up with the theory did. The geologists and archaeologists couldn’t tell the difference until told what it was. Once you know what to look for, you see that none of the fossils are ever cut and that air bubbles are present. The same is seen in samples from the pyramids. It is all in that link.

          So essentially your argument is that Davidovits et al.’s work is all fraud. That is possible, of course, but it seems a quite extreme default assumption.

          This “peanut gallery” criticism is really annoying, by the way. It reminds me of when I looked up the pH of LaCroix and discovered a great example of inappropriate application of the argument-from-authority heuristic. I found the internet awash in people waiting for a government or corporation to tell them the answer… except for a little kid on YouTube who actually measured it for himself for under $5.

        • Anoneuoid:

          You’ve got a dog that didn’t bark. If this theory were true, you could tell geologists “Here is how you tell a cast block from a quarried block. They are distinctly different. These blocks over here are cast blocks, and those over there are quarried.” Other geologists would then say “Yup, the cast blocks are distinctly different, and there is no evidence that they were quarried.” If this theory were true, I would expect many of these dogs to bark.

          You didn’t mention the Jana article, which looks like a very detailed geological examination of the issue. (The “geologist [putting] a limestone chip under a microscope” I talked about.) Jana did not verify the theory — the dog did not bark. To the contrary, Jana writes that:

          “This study conclusively demonstrates that there is absolutely no evidence of an alkali-aluminosilicate-based composition in the binder phases of the casing stones, nor is there any evidence of “unusual” constituents in the pristine, bulk uncontaminated interior of the casing stones to call for a “man-made” origin. Despite the detection of a man-made “coating” on the Lauer casing stone, the stone itself is determined to be nothing but a high-quality natural limestone mineralogically, texturally, and microstructurally similar to that found in the quarries at Tura-Masara.

          Based on the present scientific proof of the absence of a “geopolymeric” signature or any “synthetic” composition in the same Lauer casing stone, originally used as a “smoking gun” to support the concrete-pyramid hypothesis, the proposed geopolymer hypotheses of Davidovits and others, or any “new” hypothesis for that matter really has no practical credibility (let alone their astounding extension to both core and casing blocks, and granite/granodiorite/basalt/travertine/quartzite blocks, columns, pavements, and other architectural artifacts associated with the Great Pyramids) unless detailed and systematic research is done by a diverse group of scientists on actual pyramid samples of known provenances. A valid hypothesis must rest upon a reliable set of unquestionable data.

          Despite much reported evidence of the use of zeolitic (geopolymeric) chemistry in the ancient technologies, its promising future in the modern cast-stone technology and as innovative building materials for sustainable development, there is no evidence of use of geopolymeric cement in the pyramid stones. Based on unassailable field evidence in favor of a geologic origin for the pyramid stones, and equally convincing results of the present laboratory studies confirming the “geologic” origin of the casing stone samples from the Great Pyramid of Khufu (originally used as evidence for a man-made origin), the author is convinced that the Egyptian pyramids stand as testament to the unprecedented accuracy, craftsmanship, and engineering skills of the Old Kingdom (2500 BC) stone masons!”

          You also don’t mention the granite stones over the King’s Chamber, which are huge and everyone agrees were quarried, hauled, and set in the pyramid. So everyone agrees that the pyramid builders did, indeed, use the technique you claim is ten times more difficult and they used it where the cast alternative would be most superior.

          You also didn’t mention that Davidovits seems to have retreated to claiming only that some upper-level blocks were cast.

        • >> If this theory were true, you could tell geologists “Here is how you tell a cast block from a quarried block. They are distinctly different. These blocks over here are cast blocks, and those over there are quarried.”

          Yes, this is in fact the case. I already summarized the differences for you:

          >> Once you know what to look for you see that none of the fossils are ever cut and the presence of air bubbles. The same is seen in samples from the pyramids. It is all in that link.

          So essentially your argument comes down to assuming that is a lie.

      • >> The Egyptians were able to lay out pyramids using surveying techniques that ensured a multi decade pyramid construction came out properly square based. They learned what steepness to set the slope of the pyramids by observing failures of the bearing capacity of the soil from earlier too steep pyramids. All of this occurred more than several thousand years ago

        So people are able to develop convoluted technology just to satisfy the irrational beliefs of their leaders.

        Is that an argument for or against human rationality?

        • Eh, my impression is that it wasn’t just leaders that believed in a complicated afterlife… but here’s the main point:

          1) Doing something like building a pyramid (or successfully running large farms) requires a long time and a lot of resources.

          2) Planning, surveying, accounting, and calculating makes long term projects of this type much more successful.

          3) In the presence of a need for planning, surveying, accounting, and calculating to reduce long-term project risks, humans developed whole professions around those tasks thousands of years before anything like modern computing or the internet, etc.

          4) Therefore, humans have been able to adapt to various long term risks through cultural practices encouraging calculation, checking calculations, planning, record keeping, measurement, etc for at least the last 3000-5000 years.

        • Also note, saying that human society has set up systems that adapt the society to long term risks is different from saying that *each individual human is adapted to long term risks*

          Lots of people aren’t Civil Engineers, but we do have Civil Engineers and they do things like build dams that last hundreds of years and operate without wiping out millions of people (though there are also spectacular failures of dams).

        • Daniel said,

          “4) Therefore, humans have been able to adapt to various long term risks through cultural practices encouraging calculation, checking calculations, planning, record keeping, measurement, etc for at least the last 3000-5000 years.”

          This sounds like too strong a conclusion to draw from (1) – (3). I’d replace (4) with something more like,

          4′) Thus, some human societies throughout the last 3000-5000 years have sometimes been able to adapt to some varieties of long term risks through cultural practices encouraging calculation, checking calculations, planning, record keeping, measurement.
