Who’s your favorite expert?

Dan Kahan writes:

A while back you had some posts on how it is that we know we can trust science on climate change, & even more interestingly, how we know which scientists to trust, and how much we rely on our own understandings of the science. Here’s a paper relating to that attached–results showing that people have a tendency to identify someone as an “expert scientist” on an issue (climate change, gun control, nuclear power) only if the putative expert expresses a position congruent with people’s cultural predisposition on the issue.

Ooohh–ugly tables and graphs! I should still probably read this sometime, though–it looks important. Perhaps it relates to this story.

7 thoughts on “Who’s your favorite expert?”

  1. Hello. This looks potentially interesting, but the link is coming up dead for me. Maybe the problem is on your end?

    Mike Spagat

  2. Comments very welcome, including ones on how to improve the appearance of the figures. Am so disappointed b/c I am very much sold on the Gelman et al. philosophy on graphic presentation of multivariate regression results. But I was held back in grade school b/c I was so terrible in art, so I'm not surprised — only eager to improve so I can finally escape the taunting!
    (not sure why Mike had a problem w/ the link–maybe something somewhere was down for maintenance at 3:54 am)

  3. Dan,

    I just read this through and find it extremely interesting that these personality characterizations take you so far in predicting how people perceive scientists as lining up on various issues.

    This suggests some kind of respect for science in the sense that people seem reluctant to say something like "scientists generally agree that proposition A is true but I still think it's false." That is, if people think that A is false then they also tend to believe that scientists agree with them on this.

    Simultaneously, people seem to disrespect science in the sense that many appear to make little effort to discover what scientists actually think. Instead, people seem to just jump to the conclusion that scientists agree with them.

    One thing that worries me a bit about your methodology is that you seem to force people to express opinions on things where many people probably don't have much of an opinion. That is, "Most Agree" + "Most Disagree" + "Divided" seem to pretty much add up to 100%. But a lot of people probably don't really know what kind of scientific consensus may or may not exist on some of these issues.

    In fact, some of these answers on where scientists stand may depend on which scientists we are talking about. It seems possible to me, for example, that nuclear engineers are more positive about nuclear waste disposal than are scientists as a whole. So on some of these issues I'm not sure it's totally clear whether there is a consensus and, if so, what it actually is.

    Nevertheless, in principle, in the first part of the study you are asking people about matters of fact, and it is very striking that, depending on their personal tendencies, people arrive at very different versions of the facts.

    I find the second part of the paper less striking. If I'm pretty sure that having people walking around with concealed weapons makes life more dangerous, then I probably won't be very impressed by the credentials of a guy who says otherwise. If this person were to then present me with a mountain of evidence suggesting that I'm wrong but I continued to rate him as a non-expert, that would be different. But the evidence you give for your subjects to question their own judgement is very weak. Essentially, it is that the purported expert is associated with top academic institutions, wears a suit, and is a not-so-young white male. I don't think such information would or should shake the confidence of many people.

    Mike Spagat

  4. Thanks much, Mike, for the feedback.
    Your reaction to the experimental component is extremely interesting to me. There is an interesting debate about the extent to which using priors to help assess the probative force of new information is consistent w/ Bayesianism (of course it is; Bayesianism doesn't care how you figure out the likelihood ratio of new information) or whether and when it is a normatively sensible strategy for decisionmaking under uncertainty (this is the interesting part).

    You say that when a putative expert disagrees with a person on an issue like gun control, the putative expert's credentials (elite education, elite university position, membership in the National Academy of Sciences) are *not* enough to justify treating the putative expert as a genuine expert; I take it, then, that if the putative expert *does* agree with one's view, you'd say one can treat that putative expert's credentials as confirming expertise. In that case, you are necessarily saying one's priors should inform the likelihood ratio one assigns to the new information that "this putative expert believes x" on guns.

    If carried to the extreme, no one who uses this view would change their mind. A less extreme version — I'll call a person who *disagrees* an expert (and weight his views accordingly) only if he supplies me w/ *more* evidence than someone who agrees w/ me — still creates an inertial bias in favor of one's priors within a Bayesian scheme. It would also explain why expert opinion doesn't change minds very readily — people who start w/ different priors on what the expert consensus is will discount any new evidence that "experts" feel otherwise!

    Some commentators treat this sort of endogeneity between priors and likelihood ratio as an obvious barrier to rational decisionmaking. See, e.g., Zimper, A. & Ludwig, A., On attitude polarization under Bayesian learning with non-additive beliefs, Journal of Risk and Uncertainty 39, 181-212 (2009); Rabin, M. & Schrag, J.L., First Impressions Matter: A Model of Confirmatory Bias, Quarterly Journal of Economics 114, 37-82 (1999). But for a really cool paper — cool b/c it is a great experiment on how experts on statistical methods *are* influenced by their agreement or disagreement with a paper's conclusion when evaluating the quality of its methods section, and cool b/c it takes a different view (I think closer to your view, if I am fairly characterizing it) of whether this is normatively okay — see Koehler, J.J., The Influence of Prior Beliefs on Scientific Judgments of Evidence Quality, Org. Behavior & Human Decision Processes 56, 28-55 (1993).

    I think one needs to embed the decisionmaking in a larger utility function that considers error costs and the cost of alternatives to evaluating the quality of new information based in part on priors… But anyway, the experiment in the paper is meant only as evidence that this is what explains disagreements about issues on which there is "expert consensus" — readers can decide if this is a good thing or a bad one.
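
    To make the endogeneity between priors and likelihood ratio concrete, here is a toy numerical sketch (the parameterization is purely illustrative and is not taken from the paper or the cited articles): the likelihood ratio an agent assigns to an expert report is scaled by how much the report agrees with the agent's current belief. Two agents who start from different priors and see the same mixed stream of expert reports end up further apart rather than converging.

    ```python
    # Toy sketch (illustrative parameterization only): a Bayesian update in which
    # the likelihood ratio assigned to an expert report is shaded by the agent's
    # current belief p = Pr(A is true), so agreeing reports count for more than
    # disagreeing ones.

    def update(p, report_says_true, base_lr=3.0):
        """One update; base_lr is the weight a neutral observer would give the report."""
        agreement = p if report_says_true else 1.0 - p
        effective_lr = 1.0 + (base_lr - 1.0) * agreement  # shrinks toward 1 as agreement falls
        lr = effective_lr if report_says_true else 1.0 / effective_lr
        odds = (p / (1.0 - p)) * lr
        return odds / (1.0 + odds)

    skeptic, believer = 0.2, 0.8
    for i in range(20):                  # alternating pro and con expert reports
        says_true = (i % 2 == 0)
        skeptic = update(skeptic, says_true)
        believer = update(believer, says_true)

    print(round(skeptic, 3), round(believer, 3))  # the two beliefs drift apart, not together
    ```

    With a fixed likelihood ratio, the same alternating stream would leave both beliefs roughly where they started; it is the prior-dependent weighting that produces the divergence.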

  5. I saw this entry when it appeared, immediately downloaded the paper, and several weeks later finally got around to reading it. But I have some thoughts on it, so here they are:

    The central hypothesis is that people evaluate experts according to consistency with their values. An alternative hypothesis is that people simply evaluate experts according to consistency with their own opinion on the subject: people find experts who agree with them more credible than those who don’t. One way to test the alternative hypothesis against the cultural cognition hypothesis would be to add interactions of ideology and/or party ID with risk in the analysis on p. 21, since the “low-risk” positions are associated with the political right. That point suggests that in future work it would be useful to consider some issues that aren’t typically seen in left-right terms, but which involve assessments of risk, like the importance of taking measures to protect against various possible natural disasters. If possible, it would also be interesting to consider issues on which the high-risk evaluation was associated with the political right. The discussion of the cultural cognition thesis on p. 2 suggested that hierarchical/individualist values would go with resistance to claims about environmental risk. However, there seem to be some issues, like perceived threat from terrorism, on which the right sees more risk. They may not qualify as “environmental,” but in that case it would be desirable to say more about the scope of the thesis, especially because there’s a hypothesis that sensitivity to perceived threat is a root of conservatism (John Hibbing).
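
    To sketch what that comparison might look like (the variable names credibility, hierarchy, individualism, conservatism, party_id, and low_risk_position are hypothetical stand-ins, not the paper's actual columns), one could fit the credibility ratings once with the worldview-by-risk interactions and once with ideology/party-by-risk interactions, and compare the fits:

    ```python
    # Illustrative sketch only: column names are hypothetical stand-ins for the
    # study's data set, and "expert_ratings.csv" is a placeholder file name.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("expert_ratings.csv")

    # Cultural-cognition model: worldview scores interacted with whether the
    # featured expert took the "low-risk" position.
    m_cultural = smf.ols(
        "credibility ~ hierarchy * low_risk_position + individualism * low_risk_position",
        data=df).fit()

    # Alternative: left-right ideology and party ID interacted with the expert's
    # position, capturing "agrees with my side" directly.
    m_political = smf.ols(
        "credibility ~ conservatism * low_risk_position + party_id * low_risk_position",
        data=df).fit()

    # Compare fit; better yet, include both sets of interactions in one model
    # and see which terms survive.
    print(m_cultural.aic, m_political.aic)
    ```

    If the ideology/party interactions absorbed most of what the worldview interactions explain, that would favor the simpler "agrees with me" story.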

    The results involving the interactions on p. 21 were hard to interpret because of the number of variables involved and the differing scales. It would be helpful to produce a graph showing the predicted values for different combinations of the hierarchy and individualism scores. That is, the vertical axis would represent predicted credibility, and the horizontal axis could represent hierarchy and/or individualism scores. Then you could have a line representing the predicted evaluation of the “high-risk” expert and another line representing the predicted evaluation of the “low-risk” expert. I say and/or because with two variables, you would need to try to depict a three-dimensional graph, which could also be hard to read. But there might be some way to combine them both into one without losing much information. In any case, I think something along those lines would help in presenting the results. It would also be useful to supplement the table on p. 21 by adding the results from models including hierarchy by risk and individualism by risk interactions, but no three-way interaction. (The “step 1” results could be omitted to save space.) Three-way interactions are hard to grasp, so it’s good to start with the simpler model and build up to them.
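
    A minimal sketch of the graph described above (again with hypothetical column names and a placeholder file): it fits the three-way interaction model, then plots predicted credibility against the hierarchy score, one line per expert condition, holding individualism at its sample mean.

    ```python
    # Illustrative plotting sketch (hypothetical column names): predicted
    # credibility vs. hierarchy score, one line for the "high-risk" expert and
    # one for the "low-risk" expert, with individualism held at its mean.
    import numpy as np
    import pandas as pd
    import matplotlib.pyplot as plt
    import statsmodels.formula.api as smf

    df = pd.read_csv("expert_ratings.csv")   # placeholder file name
    fit = smf.ols("credibility ~ hierarchy * individualism * low_risk_position",
                  data=df).fit()

    hierarchy = np.linspace(df["hierarchy"].min(), df["hierarchy"].max(), 50)
    for low_risk, label in [(0, "high-risk expert"), (1, "low-risk expert")]:
        grid = pd.DataFrame({
            "hierarchy": hierarchy,
            "individualism": df["individualism"].mean(),
            "low_risk_position": low_risk,
        })
        plt.plot(hierarchy, fit.predict(grid), label=label)

    plt.xlabel("Hierarchy score (individualism at its mean)")
    plt.ylabel("Predicted credibility")
    plt.legend()
    plt.show()
    ```

    A second version of the same plot with individualism on the horizontal axis (hierarchy held at its mean) would cover the "and/or" without attempting a three-dimensional figure.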
