When are cognitive forcing functions interesting?

This is Jessica. I’ve been noticing recently how, across different decision domains where people interact with tech, it’s common to find researchers proposing solutions to bad decision making that involve some kind of cognitive forcing function. Examples that come to mind include sharing misinformation on social media, where asking people to stop and think about the accuracy of a news story headline seems to reduce their tendency to spread misinformation; making decisions with the help of an AI, where cognitive forcing functions reduce overreliance on the AI; and security-related decisions, where people need to be nudged off autopilot to not put themselves at risk.

One thing I find interesting is that my knee-jerk reaction when I learn about research like this is to feel kind of disappointed. Really, I want to ask, all you have to do is interrupt people to improve their decisions? I find this dismissive reaction interesting because often the problems that are solved by cognitive forcing functions actually are important, as in the examples above. And the work may be rigorously done. So it feels unfair to immediately dismiss them as not worth thinking about or working on.

One reason I suspect for my disappointment with forcing functions is that when the solution is to stop and make someone think, the problem ceases to seem exciting because it no longer seems hard. The solution seems easy, and so the problem seems trivial post hoc. But this reaction is a little silly, because topics like misinformation and appropriate trust in predictive models for decision-making are hard problems at a societal level. I guess the disappointment might stem from some kind of internalized bias from being in a discipline where hard problems and technical cleverness in solutions are both seen as essential to good research.

At the same time, though, I do think there can also be an air of “small nudge, big effect” that warrants being a little cautious around arguments about the effectiveness of simple cognitive forcing functions. When the nudge of thinking deliberately for a moment is presented as the key finding, some of the nuance can get lost, for example that people have different reasons for being thoughtless and react differently to interventions. While the automatic-versus-deliberate reasoning distinction might help explain a lot of bad decisions in the world, it seems like a limited lesson for inspiring further work, just as distinguishing “night science” and “day science” is of limited helpfulness. The harder problems are likely to be those where people are reasoning deliberately but with certain beliefs or intuitions guiding them as if on autopilot, and the more interesting solutions might be the ones that propose well-thought-out constraints or procedures without trying to shut down more reactive or intuitive behavior entirely.

There’s also perhaps a hesitation based on what these kinds of results suggest about the questions being asked in that research area. If stopping a person to make them think prevents them from making dumb decisions, then it seems we were wrong to assume people were intentional or deliberate in that domain in the first place. One could take away from this kind of work that many of the hard problems related to people misinterpreting information might just fade away if we assume that people are zombie-like by default, smashing their hands around on keyboards when something shiny catches their eye on social media, but capable of being jolted into some actual intelligence when interrupted by a pop-up. The problem space starts to seem less interesting, as if we’ve upper-bounded how well some more complicated solution aimed at improving people’s decisions might do.

But all that said, it does seem plausible that if you’re trying to find the best way to improve decisions related to misinformation, AI-advised decision making, usable privacy and security, etc., the effect size of asking someone to simply stop and think for a moment is large relative to other things you could try. And if it’s newsworthy that you can solve some data-driven decision-making problem by forcing a person to stop and read a couple of words, the more reasonable conclusion is probably that we’re in low-hanging-fruit territory and more, not less, work is warranted. Plus it definitely seems better to be straightforward about what seems to cause an improvement to decisions, even if it’s as simple as interrupting them in some way, rather than pretending it was something else. For example, better to recognize that interrupting someone’s decision process played a big role, and not some fancy new AI explanation or uncertainty visualization that you created! Still, it seems hard to reconcile the potentially conflicting conclusions one could draw from these kinds of solutions.

10 thoughts on “When are cognitive forcing functions interesting?”

  1. Stop and think may simply be a reasonable first step – it is far from a solution, I expect. I find the System 1/System 2 approach to decision making fairly compelling. System 1 is automatic and generally in charge. Stop and think is a way to interrupt this automatic response, which can engage System 2 to try to reach a better decision. It may (arguably) be necessary, but it is almost certainly not sufficient to result in improved decisions. This is why I think most nudges are overrated – it may not be easy to improve decisions, especially to get large improvements. Cognitive forcing may simply be a first step, but many more steps may be necessary.

  2. It occurs to me that what I wrote implies that demonstrating the power of a cognitive forcing function is more useful if thought of as a critique of research in some domain than as a standalone result. The effect from the cognitive forcing function estimates how much variance paying attention itself explains, which becomes a baseline to compare against the effects of more specific ways of guiding attention. It could be used to provide context for interpretation whenever people propose more complicated solutions for improving human decisions.
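
    To make that baseline idea concrete, here is a minimal sketch in Python (a toy simulation with made-up effect sizes, not data from any actual study): the nudge-only condition estimates the effect of paying attention alone, and whatever increment a fancier intervention adds beyond that baseline is the part its specific design can claim credit for.

        # Toy sketch: treat the nudge-only effect as a baseline for judging
        # a more elaborate intervention. All numbers are hypothetical.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 500  # participants per condition

        control = rng.normal(0.60, 0.15, n)  # no intervention
        nudge = rng.normal(0.70, 0.15, n)    # stop-and-think prompt only
        fancy = rng.normal(0.72, 0.15, n)    # prompt plus elaborate explanation

        def cohens_d(a, b):
            pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
            return (b.mean() - a.mean()) / pooled_sd

        d_nudge = cohens_d(control, nudge)  # effect of paying attention alone
        d_fancy = cohens_d(control, fancy)  # effect of the full intervention
        print(f"nudge baseline d = {d_nudge:.2f}, full intervention d = {d_fancy:.2f}")
        print(f"increment beyond attention alone: {d_fancy - d_nudge:.2f}")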

  3. Jessica:

    Interesting discussion. I wonder if something else is going on, which is that these easy interventions are not always so easy! “All you have to do is interrupt people” sounds like nothing, but we go through our days not interrupting ourselves. I remember years ago driving to work one morning, pulling into the parking lot and suddenly realizing that I’d been essentially asleep for the entire 45-minute drive! Not literally asleep, but I hadn’t been paying any attention to anything. I’d been on autopilot. That was scary.

    Another example of good advice is to never yell at or chew out your kids (aside from urgent safety warnings). I think this is good advice, but it can be hard to do. Screaming at kids just comes naturally sometimes. So I guess that part of understanding these sorts of interventions is to think a bit about why they’re needed in the first place, why people aren’t doing the more optimal behavior already.

  4. If AI can recognize misinformation, then why doesn’t it just not show you that article in the first place? Is this an example of what you are talking about?

  5. “the effect size of asking someone to simply stop and think for a moment is large relative to other things you could try.”

    I think it depends on what kind of information we are dealing with. Someone who is trying to persuade me of an alleged fact will try to at least appeal to a sense of rationality. Disinformation depends upon this appeal. “Can’t we at least raise a question about X? One study showed that X was extremely dangerous. Why aren’t people even talking about that study?” This formulation is very common. Usually the study in question is bogus or nonexistent, or the interpretation is completely unjustified, but the formulation is effective exactly because it appeals to the listener’s desire to be rational and not ignore inconvenient facts. And in that manner the false fact is slipped into people’s minds. All Alex Jones or Tucker Carlson are ever doing is “raising questions.” Asking people to stop and think won’t be effective against that type of disinformation.

  6. Attempts to stop misinformation will work just as well as trying to stop a respiratory virus or climate change. These are things humanity will be dealing with forever, because every solution is worse than the problem.

    Instead of trying to stop misinformation people should be taught how to reason in a world where nearly everything they read/hear is incorrect to some extent. Then you get a robust diversity of decisions being made instead of a fragile monoculture of thought.

    This amounts to populating the denominator of Bayes’ theorem with all the possible explanations for what each individual observes, with both priors and likelihoods differing (sometimes greatly) between people who have had different experiences.
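
    To make that last point concrete, here is a minimal sketch in Python (the hypotheses and probabilities are made up, purely for illustration): the denominator of Bayes’ theorem sums over every candidate explanation, so two people who carry different priors and likelihoods over those explanations reach different posteriors from the same report.

        # Toy Bayes sketch: same observed report, different priors/likelihoods.
        import numpy as np

        hypotheses = ["claim is true", "honest error", "deliberate misinformation"]

        def posterior(prior, likelihood):
            # P(H_i | report) = P(report | H_i) P(H_i) / sum_j P(report | H_j) P(H_j)
            joint = np.array(prior) * np.array(likelihood)
            return joint / joint.sum()

        # Person A trusts the source; Person B has been burned before.
        person_a = posterior(prior=[0.6, 0.3, 0.1], likelihood=[0.5, 0.4, 0.9])
        person_b = posterior(prior=[0.2, 0.3, 0.5], likelihood=[0.3, 0.4, 0.9])
        for h, pa, pb in zip(hypotheses, person_a, person_b):
            print(f"P({h} | report): A = {pa:.2f}, B = {pb:.2f}")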

    • Anon:

      I disagree. I mean, sure, I agree that these problems will always be with us, but I disagree that it’s not good to try to stop them.

      You write, “Instead of trying to stop misinformation people should be taught how to reason in a world where nearly everything they read/hear is incorrect to some extent.” Sure, but one way to teach people how to do this reasoning is to actively stop misinformation.

      What you say in your comment reminds me of when people say that science is self-correcting. My reaction is: Sure, science is self-correcting only because individual scientists do the correction. Public critics such as Uri Simonsohn, or debunkers of disinformation such as Eggers and Grimmer, are a key part of the education process. I understand that everyone’s busy, but when people can take the time to demonstrate skepticism, that has value.

      • Your examples show people dealing with misinformation as it comes, rather than trying to stop the production/sharing of it.

        I agree with this approach. I would add that multiple independent reports are the best way, especially if they come from people/groups who are incentivized to compete with each other.

        Regarding research in particular, independent replication is currently actively discouraged except for a few exceptional projects. And all those projects show that the vast majority of research findings cannot be reproduced.

      • > Instead of trying to stop misinformation people should be taught how to reason in a world where nearly everything they read/hear is incorrect to some extent.

        At the same time, a tenet of Russian information warfare since time immemorial has been to attack the very concept of objective information itself. The endgame is to achieve chaos and mutual distrust in the case of external propaganda consumers, and to induce apathy in the case of internal consumers.

        I guess the above can be reconciled somehow, but it’s something to be aware of: both “I trust everything” and “I trust nothing” are good outcomes for propagandists.
