Kahan: “On the Sources of Ordinary Science Knowledge and Ignorance”

Dan Kahan points me to this paper:

It is impossible to make sense of persistent controversy over certain forms of decision-relevant science without understanding what happens in the vastly greater number of cases in which members of the public converge on the best available evidence without misadventure. In order to live well—or just to live, period—individuals must make use of much more scientific information than any (including a scientist) is in a position to comprehend or verify for him- or herself. They achieve this feat not by acquiring even a rudimentary level of expertise in any of the myriad forms of science essential to their well-being but rather by becoming experts at recognizing what science knows—at identifying who knows what about what, at distinguishing the currency of genuine scientific understanding from the multiplicity of counterfeit alternatives. Their rational recognition of valid science, moreover, is guided by recourse to cues that pervade their everyday interactions with other non-experts, whose own behavior convincingly vouches for the reliability of whatever scientific knowledge their own actions depend on. Cases of persistent controversy over decision-relevant science don’t stem from defects in public science comprehension; they are not a result of the failure of scientists to clearly communicate their own technical knowledge; nor are they convincingly attributable to orchestrated deception, as treacherous as such behavior genuinely is. Rather such disputes are a consequence of one or another form of disruption to the system of conventions that normally enable individuals to recognize valid science despite their inability to understand it. To preempt such disruptions and to repair them when they occur, science must form a complete understanding of the ordinary processes of science recognition, and democratic societies must organize themselves to use what science knows about how ordinary members of the public come to recognize what is known to science.

1. The perils of ignoring the denominator

2. Four false starts

3. Four theses on ordinary science knowledge

4. Understanding and protecting the science communication environment

I don’t have anything to add right now but I wanted to share it with you because I think it’s important.

P.S. We most recently encountered Kahan last month in MAPKIA 2.

14 thoughts on “Kahan: “On the Sources of Ordinary Science Knowledge and Ignorance””

  1. > Rather such disputes are a consequence of one or another form of disruption to the system of conventions that normally enable individuals to recognize valid science despite their inability to understand it.

    Well said. I was frustrated with the extent to which any economic analysis was dismissed in the run-up to the Brexit vote. People I talked to pointed to the failure to predict the 2008 crisis as a reason why all economic policy analysis could be safely disregarded, and said that they think economists, along with politicians and other elites, are beholden to multinationals. Based on that, I think the following are important general causes of science disbelief amongst those who don’t understand the science (which is all of us for most fields):

    – the community of scientists in question is perceived to be pursuing a politics-related agenda
    – the field in question lacks easy-to-understand, impressive predictive successes

  2. I think the idea of understanding how individuals in a culture understand and use science is good and important. I disagree with most of the claims in the quoted piece, though.

    “They achieve this feat not by acquiring even a rudimentary level of expertise in any of the myriad forms of science essential to their well-being but rather by becoming experts at recognizing what science knows”

    I think – and I think it’s abundantly clear – that most people don’t do this. Rather, they form folk theories, based on what they were taught in school, strongly influenced by the culture they are immersed in, and modified by cognitive biases and illusions. The only “scientific” part of this is the schooling part, if they were lucky enough to have had some good classes.

    The folk theories include informal notions of what “science” is and how it works.

    An important driving factor is that people will always construct an explanation for anything and everything. It’s what people do. The thing being explained has to fit into a good story. Good scientists avoid putting too much stock in their own explanations until they can find some good support for them, and they are fairly rigorous about evaluating that support.

    “science must form a complete understanding of the ordinary processes of science recognition…” There is no organization “science” that must or will do anything. There are only scientists and organizations that fund or support their work. And if we wait until there is a “complete” understanding of a complex psychological and cultural situation, we will still be waiting at the heat death of the universe. We don’t yet have a “complete” understanding of *anything* if you get right down to it.

  3. To continue my remarks above, I comment on this fragment:

    “In order to live well—or just to live, period—individuals must make use of much more scientific information than any (including a scientist) is in a position to comprehend or verify for him- or herself.”

    This is just not how most people work. It’s like saying that a baseball player has to understand physics and physiology. Or that a cell phone user has to understand the scientific principles of radio and computers. No! People usually learn *how to use* things (including social and cultural things), not to “understand” them. Or rather, they understand them at the level of their folk theories; once they have an understanding that fits into their overall story about the world, they are satisfied and don’t need to put more effort into a more “scientific” understanding.

    Which has real benefits, because it takes a lot of time and energy to go beyond the folk theories, and we have so many, many things that could be explored if there were a real need. No one has enough energy for that!

    • On this I don’t agree. Understanding might be a bad choice of words, but the baseball player must have a mental model of the dynamics of physical quantities relevant to his task, even if he cannot verbalize it easily. It is probably some phenomenological model, maybe similar to the Galilean variety, but arguably what we call understanding in e.g. biology is often not a more accurate model of the phenomenon at hand than this. And people will have such a model for weighing information sources, even if we don’t want to call it understanding because they cannot communicate it.

      • This topic is quite closely related to the research I do! Some would argue that your understanding is needlessly complicated. See https://en.wikipedia.org/wiki/Gaze_heuristic

        The essence is that a fielder doesn’t need to know where the ball will land. He/she only needs to be there when it comes down (a rough simulation sketch of this idea follows below). On a similar note, this procedure (run in reverse) can help one to avoid a disastrous collision while driving. Therefore, no intricate mental model, differential equation, or other complex strategy is necessary to solve real-world problems such as these.

        A good read on the general topic of simple decision-making strategies: https://www.edge.org/conversation/gerd_gigerenzer-smart-heuristics

        There’s a lot of vigorous debate over this very point–do people use such simple strategies, or do they use the far more complex type of information-integration that you propose? I think this field remains quite fertile for empirical studies!
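        To make the gaze heuristic concrete, here is a minimal simulation sketch in Python. It is illustrative only: the projectile parameters, the fielder’s starting position and running speed, and the simple control rule (back up when the gaze angle is rising, run in when it is falling, which approximates keeping the angle constant) are assumptions chosen for the sketch, not details taken from the linked pages.

        ```python
        import math

        def gaze_heuristic_catch(ball_vx=15.0, ball_vy=25.0,
                                 fielder_x0=60.0, run_speed=7.0,
                                 dt=0.01, g=9.81):
            """Fielder tries only to keep the gaze angle to the ball constant."""
            t, prev_angle = 0.0, 0.0
            fielder_x = fielder_x0
            while True:
                t += dt
                ball_x = ball_vx * t                      # ideal projectile, no drag
                ball_y = ball_vy * t - 0.5 * g * t * t
                if ball_y <= 0.0:                         # ball is back at eye level
                    return ball_x, fielder_x
                # Elevation angle of the ball as seen from the fielder's position.
                angle = math.atan2(ball_y, fielder_x - ball_x)
                if angle > prev_angle:
                    fielder_x += run_speed * dt           # ball rising in view: back up
                elif angle < prev_angle:
                    fielder_x -= run_speed * dt           # ball dropping in view: run in
                prev_angle = angle

        landing_x, final_x = gaze_heuristic_catch()
        print(f"ball lands at {landing_x:.1f} m; fielder ends up near {final_x:.1f} m")
        ```

        The point of the toy run is that the fielder ends up close to where the ball comes down without ever computing a landing point; the only quantity tracked is whether the ball is drifting up or down in the visual field.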

  4. The subject is important and the reading is thought-provoking (although unnecessarily obtuse, in my view) – but I fundamentally disagree with much of it. The basic thesis – that analyses of the science ignorance problem suffer from the “denominator problem” of failing to consider the many cases where opinions are not divided – I think is wrong. The areas that have public agreement might be many, as the author suggests, but I think that is the result of public indifference, not a reflection of agreement with and understanding of the scientific consensus. The one example discussed in some detail is that of genetically modified (GM) foods, which the author notes is controversial in Europe but not in the US. No explanation is offered for the difference. And, while scientific agreement generally exists that GM is not a threat, it is a very real issue of disagreement in Europe.

    I think there is a simpler explanation that the author has insufficiently considered. When there are large financial interests at stake, then public interest can be provoked and manipulated. Lacking clear interests, the issues are simply ignored by the public. But I think the author mistakes lack of attention for agreement. It is simply lack of attention.

    The article makes more sense when it explores the insurmountable obstacles to public understanding of scientific issues, and the fact that public knowledge of these issues is socially constructed. I didn’t follow his argument about how this knowledge is socially constructed, but I think it misses important elements. This is where status, prestige, pedigrees, and other signals become important. If the researcher comes from an Ivy League school, they are given more credibility. If they publish in peer-reviewed publications, they are granted more authority. And so on. In this superstar economy, signals become overly important. And that is why so much is invested in obtaining the necessary signals. Add in large financial incentives and you have the recipe for manipulating public understanding. Almost by definition, there are a limited number of issues that will rise to the level of this paradigm. Thus, I think this explains the “selection problem” the author cites much better than the offered explanation that there is mostly agreement on the science in the myriad cases that do not rise to this level.

    • I think your reaction to this piece is well-founded.

      A small note: you wrote “In this superstar economy, signals become overly important.” I would argue that ‘signals’ such as the ones that you describe have ALWAYS been important to humans, as they are to many other mammals–and they will always remain important.

  5. Thanks for highlighting this paper. I agree that there needs to be greater effort dedicated to understanding scientific epistemology, specifically the process by which science makes its adherents face up to unpleasant and/or alarming facts.

  6. I think one must consider a few basic, objective effects on the perception of science by the Public (or Pub, as friends call him): (i) some phenomena are vastly more complicated than others, so the perception that X-rays are better understood than climate is valid. (ii) areas of research that have lower barriers to entry will have more incompetent practitioners. This is not to say that there are no brilliant people in e.g. psychology (where you need basically no knowledge of formal methods to become an Academy member), but press releases may easily be dominated by unreliable information. (iii) there are very strong positive feedback loops at work: once a topic generates some controversy, it’s easy for laymen to conclude that this is one of “those” research areas, and to assume from then on that every talking head is politically motivated. Furthermore, this discourages smart researchers from entering the area; many actively avoid politically charged topics.

    So with all this in the picture, some incidental initial perturbation, like some political candidate capitalizing on a certain topic, might set the vortex in motion, meaning that the set of highly controversial topics is essentially random. I might be wrong on this.

    • Your first paragraph makes a lot of sense, but I’d change the last part (“meaning that the set of highly controversial topics is essentially random. I might be wrong on this.”) to something a bit milder, perhaps “implying that there is inherently a huge degree of uncertainty in predicting which topics will become highly controversial.”

  7. @Ibn: “but the baseball player must have a mental model of the dynamics of physical quantities relevant to his task, even if he cannot verbalize it easily.”

    In some sense, yes. But the subject was how people understand scientific ideas or bypass them by relying on others. Years ago, people’s mental models of an object sliding off a table were studied (no references, sorry). Most people thought it would get to the edge of the table and then drop straight downwards. Most of the rest thought it would fly off horizontally, slow down to a stop, then drop vertically. Very few thought it would arc gradually downwards.

    I’m sure most of those people could have caught the object out of the air as it began to fall. But their scientific knowledge was woefully lacking.
