Surprise-hacking: “the narrative of blindness and illusion sells, and therefore continues to be the central thesis of popular books written by psychologists and cognitive scientists”

Teppo Felin sends along this article with Mia Felin, Joachim Krueger, and Jan Koenderink on “surprise-hacking,” and writes:

We essentially see surprise-hacking as the upstream, theoretical cousin of p-hacking. Though, surprise-hacking can’t be resolved with replication, more data or preregistration. We use perception and priming research to make these points (linking to Kahneman and priming, Simons and Chabris’s famous gorilla study and its interpretation, etc).

We think surprise-hacking implicates theoretical issues that haven’t meaningfully been touched on – at least in the limited literatures that we are aware of (mostly in cog sci, econ, psych). Though, there are probably related literatures out there (which you are very likely to know) – so I’m curious if you are aware of papers in other domains that deal with this or related issues?

I think the point that Felin et al. are making is that results obtained under conditions of surprise might not generalize to normal conditions. The surprise in the experiment is typically thought of as a mechanism for isolating some phenomenon—part of the design of the experiment—but arguably it is one of the conditions of the experiment as well. Thus, the conclusion of a study conducted under surprise should not be, “People show behavior X,” but rather, “People show behavior X under a condition of surprise.”

Regarding Felin’s question to me: I am not aware of any discussion of this issue in the political science literature, but maybe there’s something out there, or perhaps something related? All I can think of right now is experiments on public opinion and voting, where there is some discussion of relevance of isolated experiments to real-world behavior when people are subject to many influences.

I’ll conclude with a line from Felin et al.’s paper:

The narrative of blindness and illusion sells, and therefore continues to be the central thesis of popular books written by psychologists and cognitive scientists.

I’m reminded of the two modes of reasoning in pop-microeconomics: (1) People are rational and respond to incentives. Behavior that looks irrational is actually completely rational once you think like an economist, or (2) People are irrational and they need economists, with their open minds, to show them how to be rational and efficient.

They get you coming and going, and the common thread is that they know best. The message is that we are all foolish fools and we need the experts’ expertise for life-hacks that will change our lives.

If we step back a bit further, we can associate this with a general approach to social science, or science in general, which is to focus on “puzzles” or anomalies to our existing theories. From a Popperian/Lakatosian perspective, it makes sense to gnaw on puzzles and to study the counterintuitive. The point, though, is that the blindness and illusion is as much a property of the researchers—after all, the point is to investigate phenomena that don’t fit with our scientific models of the world—as of the people being studied. It’s not so much that people are predictably irrational, but that existing scientific theories are wrong in some predictable ways.


  1. Baruch Eitam says:

    As a researcher who studies cognitive ‘blindness’ (which I think teaches us quite a bit about key principles of cognition, is only weakly related to ‘surprise’, and in fact is not a unitary phenomenon), I think the Felin et al. paper is another demo of scientific cannibalism, in which people who know very little about a phenomenon generalize X’s hype by using a shallow analogy to reach false insights. But yet another “critical” paper is produced (and what do you know — it is even quoted in Gelman’s blog)!
    Good night, and good luck.

    • Andrew says:


      1. If you think there’s something wrong with the Felin et al. paper, maybe you could say what it is, rather than just calling it “cannibalism” that is “shallow” and “false.” In particular, Felin asked for “papers in other domains that deal with this or related issues,” so you could provide some references if you don’t feel like explaining what is exactly wrong with their paper.

      2. I quoted Felin after he emailed me his question, so I don’t really understand what you’re getting at with “what do you know — it is even quoted in Gelman’s blog.” Last year, you emailed me a question, and—what do you know—I quoted you on this blog too!

      • Anonymous says:


        “cannibalism” — the authors feed on colleagues’ work to promote themselves. “feed on” (as in a parasitic relationship) — professional psychologists (not a random joe) writing about something without putting in the necessary work to understand it or at least properly survey the empirical findings and their interpretations; or worse, putting in the work and then choosing to depict it as a caricature so they can latch on to a famous psychologist’s unfortunate quote.

        Substance: the so-called ‘blindness’ phenomena began with a fantastic demonstration by Neisser & Becklen (1975) and were only popularized many years later by Simons & Chabris, who got its importance. The point of this demonstration was the exact opposite of ‘magic’: it was rather to (a) provide an ecological demonstration of so-called ‘selective attention’ and (b) argue that there is nothing special (or magical) about such ‘strong selection’. As the authors conclude in the paper’s abstract:

        “It is argued that selective attention does not involve special mechanisms to reject unwanted information, but is a direct consequence of skilled perceiving”

        Scores of experiments performed after Simons and Chabris’ popularization tried to delimit and unpack the ‘blindness’ findings (and other strongly related phenomena like Mack & Rock’s 1998 work, as well as that on change ‘blindness’; since at least French, 1953; see Simons, 2000 for a review) because they seemingly open a window to understanding the limitations (and some would say functions) of phenomenal awareness (see a very recent special issue trying to make sense of the abundance of data and theoretical positions).

        Beyond the forced lumping together of “perceptual” and “social priming” (is giving task instructions ‘perceptual priming’?) to be able to link the two to poor Kahneman’s quote, there are no connections whatsoever between the findings — in fact we have recently shown that (at least participants’) “surprise” per se is not a key factor in producing the phenomenon (Chen et al., in press).

        So, as you like to remind us, “science is hard”; it takes effort. To that I would add that “trashing is easy,” though it also bears upon those who choose to do so.

        And regarding your point #2 — I do ask questions on your blog — do you really see a question here?

        (some) References for those interested in this and related phenomena

        Chen et al. (in press). Expecting the unexpected: Violation of expectation shifts strategies toward information exploration. Journal of Experimental Psychology: Human Perception and Performance.

        French, R. S. (1953). The discrimination of dot patterns as a function of number and average separation of dots. Journal of Experimental Psychology, 46, 1–9.

        Mack, A., & Rock, I. (1998). Inattentional blindness. Cambridge, MA: MIT Press.

        Neisser, U., & Becklen, R. (1975). Selective looking: Attending to visually specified events. Cognitive Psychology, 7(4), 480–494.

        Simons, D. J. (2000). Current approaches to change blindness. Visual Cognition, 7(1–3), 1–15.

        • Baruch Eitam says:

          Just to be clear, the above was written by me.

        • Andrew says:


          I expect that our positions are closer than they might seem based on the above comments. Regarding the two points:

          1a. I agree that no value is added by empty, irresponsible critiques, whether by insiders or outsiders. But the Felin et al. article seems to me to be thoughtful and interesting. After reading the Felin et al. article, I don’t think the gorilla experiment and others like it were useless; rather, I have a clearer sense of the limitations of those studies.

          1b. Thanks for the references.

          2. I have no problem with you, or Felin, or anyone else asking me questions or making points to me which I can respond to here. I just thought your statement, “what do you know — it is even quoted in Gelman’s blog)!” kinda missed the point. Felin shared some thoughts, asked me a question, and I posted it all here. This was no different than when you shared some thoughts, asked me a question, and I posted that.

          • Baruch Eitam says:

            You are probably correct about me missing the point; maybe because I have become very sensitive to (and maybe even cynical about) some current forms of critique. Anyway, your blog is valuable in that somehow you do manage to maintain sensible discussion. I will try to curb my own hotheaded self.

        • Terry says:

          Excellent post. Thanks. The phrase “skilled perception” is very helpful.

          Humans perceive the world in certain specific ways, and it is incorrect to think they always perceive everything they see and hear. It therefore is not surprising at all to show that they miss various things. The surprise, therefore, is relative to a straw man.

    • Andrew says:

      I’d also like to push back against the idea that criticism by outsiders (“people who know very little about a phenomenon”) is “scientific cannibalism.” And I’d also like to push back against the idea that being “critical” deserves scare quotes.

      Criticism is the essence of science. It is true that some ideas are so technical that they are not easily subject to criticism by outsiders, but if that’s the case such ideas should be labeled as such. If research results are going to be published in general-interest journals and promoted to general audiences via NPR etc., then they should be open to criticism.

      • Martha (Smith) says:

        Indeed, criticism from outside a field is often needed to get researchers in that field out of their “That’s the way we’ve always done it” mindset.

      • Baruch says:

        But this is a straw man… I myself am an outsider to the field of cognitive ‘blindness’ — the point is that you have to invest the work so you are able to depict the object of your critique with minimal professionalism and decency.

        • teppo says:

          Our goal certainly isn’t to be outside the norms of minimal professionalism and decency. We try to stick to the arguments. The claims from perception and priming studies are generalized extremely broadly (and publicly) to argue that humans (a) are blind to the obvious (obviousness is a tricky concept) and (b) suffer from illusions/delusions and lack mental depth. Those points are the explicit thesis of a number of books by scholars in this space, including Kahneman’s book (specifically, see the extensive discussion of priming and various perception studies), as well as Chabris and Simons’ The Invisible Gorilla: And Other Ways Our Intuitions Deceive Us. A more recent example is Nick Chater’s (2018, Yale UP) The Mind is Flat: The Illusion of Mental Depth and the Improvised Mind (the US subtitle of the book is “The Remarkable Shallowness of the Improvising Brain”).

          There’s an alternative approach to perception and mind that doesn’t focus on blindness, illusion and delusion. For anyone interested (forewarning: it’s hopelessly long, best for curing insomnia), we extensively discuss and debate some of these issues—with responses/comments by Nick Chater, Gerd Gigerenzer, David Funder, etc—in a recent piece in Psychonomic Bulletin & Review:

          • Curious says:

            One of the problems commonly observed in psychology is that there are these camps who believe their approach is THE approach that explains the reality of how the brain/mind functions, when the reality is that both camps provide information that is true and neither explains it all.

            It is a fact that cognitive illusions can be easily replicated. It is also a fact that we see reality with enough clarity to function daily such that we have been able to create exceedingly complex systems of economics and governance.

            Either camp denying the other would simply be wrong. A defensible theory must be able to explain and predict both.

            • teppo says:

              Curious –

              I do think there is an actual (and not just rhetorical) difference between the two camps.

              Namely, the fact that humans don’t see something (and thus are blind) is only something to be explained if our theory says we should see it. Inverse optics and psychophysics focuses on perception as a function of the actual nature of stimuli (what Tversky and Kahneman call ‘natural assessments’ – size, similarity, surprisingness, etc: see Kahneman, AER/AP, 2003). This represents a world-to-mind, camera-type view of perception that allows one to show different forms of blindness against a scientist-specified or omniscient ideal (in vision research called ‘ideal observer’). But blindness isn’t really an explanation if we instead take a more organism-centric, mind-to-world, and directed view of perception that says that perceptual obviousness isn’t as obvious as we thought, and certainly omniscience isn’t possible/useful. Thus so-called blindness is merely an artefact of experimental staging and priming, a function of the distractions and misdirection – and can be explained without reference to blindness or illusion.

              • Curious says:


                I do not follow your reasoning.

                Peripheral blind spots exist in the same world as accurate perception of what is in the corner of the room when we turn our head toward it.


              • teppo says:

                Curious –

                Sure, another way to put this point is that we are blind to an indefinite number of things. Therefore, why even engage in the exercise of pointing out what we miss (visual scenes teem with any number of things we could see, or miss)? A toad/frog will not see/recognize a cricket right in front of it, unless it moves. That’s not blindness. An organism’s Suchbild (search or seek image) guides awareness toward certain, salient things.

                [Sorry for putting my comment here: there isn’t a “reply to this comment”-option beneath your comment below.]

  2. Cbergenholtz says:

    Something related and empirical, in political science:

    “Survey experimentalists, how worried should we be about experimenter demand effects (EDEs), i.e. participants inferring hypotheses and responding artificially to help confirm them? New paper w/ @petersonerikj shows the risk of EDEs is minimal. [thread]”

  3. Dale Lehman says:

    I’m not a psychologist so perhaps the insights from the article evade me – more likely, I just wasn’t so interested in them. But the key example – the invisible gorilla study – does interest me, and I think the article makes much ado about nothing (or little). I think the invisible gorilla, while a contrived experiment, is pertinent to many applications. Marketers, in particular, are well aware of how easily we can fail to see what is in front of us. They do put a lot of effort into setting up “experimental” conditions where the surprise happens. So, I think the research is capturing an important aspect of reality. The Felin et al paper appears to be questioning the relevance of these findings for understanding how the mind works. Perhaps they are right. But I think a fairer complaint would be that the results of such surprise experiments should not be overstated. There are circumstances where the contrived experimental conditions do mimic real circumstances. Undoubtedly there are also circumstances where they do not. The paper would be better if it clarified this point.

    For me, the invisible gorilla shows how unprepared and unsuspecting people can fail to see what is in front of them – when effort has gone into creating the surprise situation. It may not say anything about how we behave when we are well prepared or when we are alerted to the situation. But it does say something about many situations that are real enough to matter.

    • Andrew says:


      I assume that Felin et al. would agree that the gorilla experiment is very relevant to real-world situations involving deception, and that they’re questioning the relevance of this experiment to real-world situations that don’t involve deception. I agree with you that deception happens a lot in the real world and is worth studying in its own right.

    • teppo says:

      Dale – Yes, you are right: one issue is that surprise-hacked findings are overstated (particularly in the books that are written by scientists in their wake: e.g., Thinking, Fast and Slow). Various forms of these findings are generalized to say things like: “virtually no realm of human behavior is untouched by everyday illusions,” “humans are blind to the obvious,” “the mind is flat,” we have an “illusion of mental depth,” etc. It’s a popular theme. Though, despite all of this blindness, illusion, bias and delusion – humans seem to do just fine in their everyday lives.

      But beyond overstatement, we think the issue is deeper. The concern is the emphasis on negative, surprising findings – and how we get to these findings, the methods used to construct them, and the a priori theories that motivate us. To illustrate, in reaction to the surprise induced by findings from priming research (discussed in Kahneman’s book and open letter to Nature), Kahneman says “Disbelief is not an option. The results are not made up…” We think that the results are, in essence, made up. Not of course in any kind of sinister sense (and importantly, not necessarily data-dredged either). Namely, they are “made up” in the sense that the experiments are staged to ensure the blind outcome. And as we note in the piece: magic provides just the tools for demonstrating blindness and illusion: various forms of misdirection—whether subtle and passive or overt and active—like distraction with irrelevant stimuli or tasks, subtle behavioral cues (e.g., pointing), delay and memory, varied types of priming and (incidental) manipulation of situations and ambient environments, misattribution and automaticity, verbal and non-verbal suggestion, etc. [Incidentally: very similar things are done with visual illusions, as discussed and shown in the work of Brian Rogers and Oliver Braddick.]

      But just as magic can be explained, so can surprise-hacked findings related to perception and priming. And it turns out that these explanations are a lot more mundane and boring. Though, these explanations also reveal something different about perception and the mind, something different than what is suggested by pointing to x type of blindness or y type of illusion. They tell us about the ex ante factors that direct perceptual awareness (manipulated in the lab, by asking questions, primes, expectations, tasks, etc) – the role of organism/species-specific factors, and in the case of humans: the role of theories, questions, problems and expectations. But that’s an altogether different approach to perception, mind and reality – altogether different from the one assumed by those who engage in different forms of surprise-hacking.

      • Martha (Smith) says:

        “Various forms of these findings are generalized to say things like: “virtually no realm of human behavior is untouched by everyday illusions,” “humans are blind to the obvious,” “the mind is flat,” we have an “illusion of mental depth,” etc. It’s a popular theme.”

        A problem I have with such statements is that they are stated as properties of “humans”, “the mind”, “we”, etc. — they ignore between-person differences and between-circumstance differences; i.e., they ignore variability.

        • Kyle C says:

          +1 This encapsulates much of what I have learned from this blog over the years. Also — such statements ignore the fact that the participants in many (not all) of the most famous studies were students taking psychology at R1 universities. A theory of generalizability from them to “people” is possible but never spelled out. This especially bugs me (for some reason) when papers distinguish between “men” and “women” based on studies of 18–21 year olds, who are of course men and women of a sort, but … really.

  4. teppo says:

    A quick addition: I think another thing that contributes to surprise-hacking is the Murray Davis-type demand for research (see the 1971 article) which meets the “that’s interesting!” criteria. The “that’s interesting!” article and similar intuition is often highlighted in doctoral training and seminars. The argument, in short, is that the best research needs to be unexpected, and it needs to surprise audiences. Some truth there. But the unintended consequence is all the pre-experimental planning and staging (and post-experimental data dredging) of surprises, leading to results that often are conjured in magic-like fashion – where the surprise is quite shallow (once you know the trick), or where the effects can be interpreted wildly differently (depending on one’s theory), or where the effects turn out to be ephemeral and/or just constructed from random noise.

    The emphasis on “that’s interesting!” can lead to fishing expeditions that feel theoretically shallow, and, more often than not, some kind of negative storyline about humans. As my co-author Joachim has put it (for more, see Krueger and Funder, 2004, BBS), this is all part of the “mistakes are fun” school of psychology (and cog sci, economics). And if one’s theoretical prior is all about looking for mistakes, bias and blindness, then one is likely to stage and find evidence accordingly. Of course, some of this is driven by sub-disciplinary norms and methodological practices (whether deception is allowed in experiments, etc). But the present cross-disciplinary excitement and generalizations (and ex ante assumption) of blindness and illusion are, I think, skewing the types of experiments that are done, what is reported to broader audiences (including popular books by scholars) and hampering our understanding of human nature.

  5. Terry says:

    I’m rather favorably inclined to the gorilla experiment. It makes us think about something we already know but often ignore. It is, therefore, simultaneously surprising and unsurprising.

    The shock of seeing the gorilla on a second viewing is genuine and makes clear that we don’t pay attention to a lot of the things around us; it makes very clear how much of the world we filter out. This is useful because it shakes us out of the naive view that we process everything we see and hear.

    Yet, we already knew this so it isn’t really surprising. We often zone out and miss things that are happening. We drive somewhere and have no memory of the drive. We don’t hear people who call to us in a public place if we don’t know they are there.

    So the bottom line of the gorilla experiment is that YOU DON’T SEE THINGS YOU DON’T LOOK AT. This is not a surprise at all when you think about it. Yet, the experiment is helpful because it makes clear how powerful this filter is. Subjects did not see something that was very close to what they were focused on and that was quite surprising once they paid attention to it.

  6. Terry says:

    The gorilla experiment depends on another limit to human perception that makes the results much less striking. Humans, almost literally, do not see things that are not in the center of their vision.

    To see this, focus intently on the center of a line of text. Without shifting your eyes, i.e., using only your peripheral vision, try to read the left and right ends of the line. You can’t do it. You can vaguely see that something is there, but you can’t actually read the text. Almost literally, you cannot see the text on the left and right.

    The experiment makes subjects focus on a point that is not the gorilla. Therefore, subjects who actually follow directions should not be able to see the gorilla. They may see there is a figure there, but there are a lot of human-sized figures, and you have to actually look at them to see that one is not a human. The results, therefore, are not all that surprising. Note also that the gorilla is the same size and shape as a human and walks like a human, not a gorilla. If it had been an actual gorilla that looked and acted like a real gorilla, I’d bet a lot more subjects would have seen it. Humans are very good at picking out unusual things in their peripheral vision, things like sabre-toothed tigers, snakes, and dangerous gorillas.

  7. Martha (Smith) says:

    Good point in the last paragraph, but the first fails to take into account the fact that folks with macular degeneration (loss of central vision) can cope to some extent without central vision.

    • Terry says:

      I didn’t know that. Thanks.

      Can they learn to read? Is this a case where the brain learns to pay more attention to information that it used to not pay much attention to? Or is the visual acuity simply not there and sufferers can never develop a clear enough image of the periphery to distinguish letters?

      I can see that if you don’t have the center of your vision available you would pay a lot more attention to what is on the periphery, even if it is fuzzy, if for no other reason than you are not distracted by what is in the center of your vision. Fuzzy info is better than no info.

      • Martha (Smith) says:

        Macular degeneration is (as the name suggests) a condition that develops in later life, so it’s not a matter of “learning to read” so much as adapting to the difficulties of reading with MD. There are various products available to aid people with the condition (magnifiers, or projection devices that allow people to use peripheral vision to read). Or some people who have it in just one eye can close that eye and read just with the other (or learn to ignore the bad eye). When my mom developed it, I found that using a photocopier to enlarge music enabled her to resume playing the piano. (However, I think it would be difficult for someone with macular degeneration who had no previous musical training to learn to read music.)

        Also, something that dawned on me in my exercise class today (while watching the instructor in my peripheral vision) is that peripheral vision may be something that is best adapted to noticing anomalies — especially signs of danger that are not directly in front but to the side. Also, I suspect that the best teachers often have developed their peripheral vision to keep aware of feedback from students at the sides of the classroom. [Then there’s the old joke that mothers (or teachers) have eyes in the back of their heads — said when they can spot mischief that the kid engaging in it thinks is out of their range of vision.]

  8. teppo says:

    “The message is that we are all foolish fools and we need the experts’ expertise for life-hacks that will change our lives.”

    “It’s not so much that people are predictably irrational, but that existing scientific theories are wrong in some predictable ways.”

    Andrew –

    I think the above points from your post are important. The problem is that we look for (or stage) counter-intuitive, negative and surprising things, and then confidently conclude that people are irrational and stupid. Of course, in some cases this is true. But the insistence that these findings are THE truth (“disbelief is not an option”) puts the scientist in a god-like, omniscient position of expertise. And now it of course turns out that many of the surprise-hacked findings are wrong, ephemeral, or not reproducible. Or in other cases the same data (with a different theory) can be interpreted/discussed completely differently. This type of argumentation is quickly dismissed as “Panglossian” (a Google Scholar search of that term + Kahneman and/or Stanovich will point interested readers in the right direction). But at this stage the problems with replication and issues with theory are suggesting otherwise.

    To link this to your earlier “hot hand” post, this quote captures this intuition: “the hot hand is a massive and widespread cognitive illusion” (quote used by Miller and Sanjurjo in their Conversation piece). And the longer context of the quote is illustrative of the above points:

    “Amos and his students Tom Gilovich and Robert Vallone caused a stir with their study of misperceptions of randomness in basketball. The “fact” that players occasionally acquire a hot hand is generally accepted by players, coaches, and fans. The inference is irresistible: a player sinks three or four baskets in a row and you cannot help forming the causal judgment that this player is now hot, with a temporarily increased propensity to score. Players on both teams adapt to this judgment—teammates are more likely to pass to the hot scorer and the defense is more likely to doubleteam. Analysis of thousands of sequences of shots led to a disappointing conclusion: there is no such thing as a hot hand in professional basketball, either in shooting from the field or scoring from the foul line. Of course, some players are more accurate than others, but the sequence of successes and missed shots satisfies all tests of randomness. The hot hand is entirely in the eye of the beholders, who are consistently too quick to perceive order and causality in randomness. The hot hand is a massive and widespread cognitive illusion.

    The public reaction to this research is part of the story. The finding was picked up by the press because of its surprising conclusion, and the general response was disbelief. When the celebrated coach of the Boston Celtics, Red Auerbach, heard of Gilovich and his study, he responded, “Who is this guy? So he makes a study. I couldn’t care less.” The tendency to see patterns in randomness is overwhelming—certainly more impressive than a guy making a study.

    The illusion of pattern affects our lives in many ways off the basketball court.”

    OF COURSE, in retrospect it’s easy to point to problems. But the issue, as you’ve noted, is the confidence with which claims are made. And I think that the ex ante staging of surprises (looking and priming for the negative) is the problematic feature of much of this type of research. It’s sold as giving us fundamental insights, but I think it’s taking the oxygen from good work. The problem won’t be solved with further replication, more data or pre-registration. It’s about having better theories. To use a clichéd (sorry) quote from Einstein: “whether you can observe a thing or not depends on the theory which you use.”
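    For readers curious about the statistical core of the Miller and Sanjurjo critique referenced above: in finite sequences, the standard hot-hand measure (the proportion of hits on trials immediately following a streak of hits) is biased downward even when every trial is independent. The simulation below is my own illustrative sketch of that point, not code from their paper; the sequence length (100) and streak length (3) match the commonly cited fair-coin example.

```python
import random

def prop_after_streak(seq, k=3):
    """Proportion of successes on trials that immediately follow
    a run of k successes; None if the sequence has no such trials."""
    idx = [i for i in range(k, len(seq)) if all(seq[i - k:i])]
    if not idx:
        return None
    return sum(seq[i] for i in idx) / len(idx)

def mean_prop(n_flips=100, k=3, n_seqs=20000, p=0.5, seed=1):
    """Average, over many independent fair-coin sequences, of the
    within-sequence proportion of heads following k heads in a row."""
    rng = random.Random(seed)
    props = []
    for _ in range(n_seqs):
        seq = [rng.random() < p for _ in range(n_flips)]
        v = prop_after_streak(seq, k)
        if v is not None:  # skip sequences with no streak of length k
            props.append(v)
    return sum(props) / len(props)

# Although each flip is independent with p = 0.5, the average
# proportion of heads after three consecutive heads comes out
# noticeably below 0.5 (roughly 0.46 for sequences of length 100).
print(round(mean_prop(), 3))
```

    This is the sense in which the Gilovich–Vallone–Tversky benchmark was miscalibrated: observing a post-streak hit rate of 0.5 in data like these would actually be evidence for a hot hand, not against it.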

  9. Another problem is when researchers stage surprises without acknowledging that they are doing so. I am thinking of experiments of this type (often found in growth mindset research, but not limited to it by any means): Subjects are given a set of problems to solve. All are told that they did really well–that is, they got 80 percent correct. Subjects in group A are then praised for being smart, and those in group B are praised for working hard. Then they are given a new set of problems; this time, they are told that they did “much worse” than before. (Etc.) What happens if the subjects’ own sense of their performance conflicts with what they are being told? Might that not affect their responses to various questions about their motivation, etc.? For example, if I find the first problem set easy to the point of triviality, and am then told I got 80 percent right and must be “really smart,” wouldn’t I begin to doubt the entire process? Isn’t it possible that I would lose some motivation–not because being called “smart” takes the wind out of my sails, but because I can see that something here is either wrong or rigged? That kind of question needs more consideration. I bring it up in my book Mind over Memes, in the chapter on growth mindset, but focus more on the term itself.

  10. Charles Carter says:

The back-and-forth in the comments is engaging. I'm no expert, but I found Kahneman's book fascinating, eye-opening and enjoyable. It finally crystallized my understanding of heuristics.
So I have a certain bias coming into this debate. Yet the pejorative language, e.g. "hacking," suggests Felin's own bias, or a conclusion expressed through indirect language prompts. Perhaps they expect readers to be blind to such language. They complain that experimental scenarios do not reflect the real world. Is that actually limited to blindness and cognitive science studies? And they imply researchers are less honest than stage magicians. I just don't see active perception and attention blindness as mutually exclusive, as the authors suggest. My sense is that much of the research is in pursuit of the limitations of heuristics that serve us very well the vast majority of the time. A common criticism of the current scientific literature is that only the new, the breakthroughs, the surprises, get published. Are these studies somehow worse?

    • Kyle C says:

      Charles —

      “My sense is that much of the research is in pursuit of limitations on those heuristics that serve us very well the vast majority of the time.”

      If, however, you read (good) blogs on applying behavioral economics to investing, you will find that this is NOT how those commentators view this research. Finance types constantly repeat that we are jumped-up monkeys just down from the trees who developed a lot of dysfunctional habits when we evolved hunting, gathering, and evading big predators on the savannahs, etc.

      Now it is possible to read such commentary to mean that modern finance just happens to be an area in which our inherited heuristics do not serve us well most of the time, but this is not the overall message IMO.

      And of course what the behavioral heuristic researchers originally “meant” their research to show is yet another question.

  11. teppo says:

Charles – yes, the priming and perception studies (and examples) discussed by Kahneman and others are certainly interesting – and influential and worth further discussion. It's the experimental setup and the associated interpretations of this work that raise questions for us (along with replicability, discussed by others). And the accumulation of problems raises broader concerns about the central thesis of blindness and delusion. Though, obviously/understandably many will disagree with this (there's a related debate/discussion about the interpretation of visual/optical illusions as well: Müller-Lyer, Ponzo, etc).

In terms of your point about "heuristics that serve us very well the vast majority of the time" – yes, that has been the program of research by Gerd Gigerenzer and many others. But those two camps—Kahneman versus Gigerenzer—have historically not seen eye to eye, at all (though that is changing, depending on what sub-disciplinary circle one is a part of).

There are some quite important, "inside baseball"-ish differences, and similarities, in the underlying assumptions about perception and optics made by the respective groups (e.g., see Gigerenzer and Gaissmaier's [ARP, 2013] focus on cues, Geisler's ideal observer, Kahneman's natural assessments, Bayesian approaches to perception, etc). We struggle with approaches that place an emphasis on the nature of stimuli (world-to-mind), and focus instead on organism- or species-specific factors. So, awareness is driven by such things as Suchbild, theory-dependence, questions, problems – and these can also be mapped onto the tasks, questions and primes in experimental settings. Put differently: salience is driven by the mind/organism rather than by environment-specific factors. These assumptions about perception turn out to be extremely important, as visual examples (and experiments) are heavily relied on and built upon, carrying over from perception work to areas such as judgment and decision-making.

    Finally, in terms of the practice of hacking (and the “hacking” language, as you note) – I suppose whether it is seen as problematic and pejorative depends in part on sub-disciplinary norms and methodological conventions, for example, depending on whether deception is allowed in experimentation. There’s important experimental preparation that goes into these studies (as there should be), and this is all informed by fundamental axioms (and theoretical priors) about what is worth studying, designing and looking for (and how the findings are interpreted and generalized) – and it is at this level that we (and others) might disagree and have concerns.

    Apologies for the long response!

    • Baruch says:

You continue to give highly abstract responses to rather concrete comments, but I haven't yet seen, in all your (rather intensive) responding, a single argument substantiating your claim that the "blindness studies" are any form of "hacking" (see the specific issues I raised above).
You also continue to insist on focusing on popular depictions of these phenomena (and hey, people like to be heard, even to be famous; although not my personal style or ability, it is not a crime yet) and on shifting between two very different research programs that have very weak connections between them (if any).
      Regarding cognitive ‘blindness’ — you still don’t really deal with the actual research, both past (Neisser’s) or recent, see this for example:
If you do have a peek you will find that it is an elaborate research program with theoretical questions and of course problems; ones that are not significantly different from those of other research programs in psychology.
Your insistence on not dealing with the actual work, and your continued focus on an imagined or stipulated compound of work (i.e. the "people are blind and dumb" paradigm), suggests an ideological or polemical position that is less interested in understanding or actually influencing the state of affairs.

      • teppo says:

        Baruch: Right – I doubt these debates (in a comments section of a blog post) meaningfully convince those who might disagree with our angle. I totally get that.

In terms of a concrete example, the gorilla seems an extremely concrete and commonly known/discussed one, and thus useful. (We can readily work through variants, antecedents and other examples of blindness.) So sticking with that example, for now: is it surprising that we miss the gorilla? Yes, if the assumption is that we should see it. Then we should correct for the blindness. And are we blind to the obvious? Yes, if we can a priori specify perceptual obviousness and blindness – which doesn't seem feasible, from our perspective (unless obviousness and blindness are specified by the experimenter, and/or they are an artifact of the setup of the experiment). We think otherwise, on both points, and thus raise questions about what is perceptually surprising and obvious. That's the heart of the matter.

        Much more to be said. The overly wordy responses above (sorry, I concur) try to get at some of that (salience, inverse optics and psychophysics, etc) – quite important issues. Social media probably isn’t the ideal place to work through all this. In terms of the 2018 Royal Society special issue – I look forward to reading that more carefully (the papers I [very!] quickly skimmed through anchored quite a bit on a Block-type approach [related to Burge] to perception, but I didn’t look through all the papers).

        There are additional examples in the two 2017/2018 Psychonomic Bulletin & Review pieces and responses: discussion of situation perception (Funder), perception and axioms of rationality (Stanovich), cues and perception (Gigerenzer), some links to Bayesian angles to perception, etc. For more, also definitely worth looking at Jan Koenderink’s extensive work (e.g., see his 2012 Journal of Physiology piece). Again, we certainly don’t expect everyone will buy into this. There are of course major debates already in the space, for example, how Kahneman and Gigerenzer see this: natural assessment versus cue-based approaches (which have intriguing commonalities). Much more to be wrestled with. There I go again – wordy and too long.

  12. Brad Wyble says:

The use of the gorilla should not blind us to the importance and relevance of this phenomenon. Simons & Chabris used this example of deception to make the point clear, and it's very effective in that respect.

Findings like the invisible gorilla have real-world implications far outside the domain of surprise and deception. For example, they're highly relevant to driving, in which people can tune out very important information even though they are watching the road. This leads to accidents when people are performing an internal, cognitively demanding task such as communicating with a car's voice-activated audio system. David Strayer has done a lot of work on this. Car manufacturers are creating voice-operated menu systems that are incredibly confusing, and people are literally dying as a result (in addition to the texting-while-driving epidemic).

    Here’s another very relevant example, in which the gorilla finding was used to explain how a police officer would have missed something that we otherwise might have considered to be “unmissable”:

Leave a Reply