Does the existence of widespread belief in political disinformation demonstrate that humans are not Bayesians?

Paul Alper writes:

Here is sparkling evidence that Bayes theorem needs revision.

The link points to a news article about prominent figures in the Republican party and right-wing media circulating misinformation about the attack on Paul Pelosi, husband of the Speaker of the House.

Alper continues:

As I have insisted several times, Bayes is strictly aspirational/normative and not descriptive and thus does not reflect what humans, as opposed to machines, actually do. Five of the above 21 claim that Paul Pelosi knew the attacker and was involved in an extramarital affair/prostitution with him. Six or so “raised doubts.” Others just had doubts. . . .

It really does not matter. “Belief Perseverance” and “Confirmation Bias” always win out. Once again, I quote the insurrectionist, Couy Griffin: “My vote to remain a no isn’t based on any facts, it’s only based on my gut feeling and my own intuition, and that’s all I need.”

I’ll look at this one from a few different angles.

Bayesian inference

First, do I agree with Alper that “Bayes theorem needs revision”? No, I don’t. Bayes’ theorem, like any mathematical result, is only as good as its assumptions. In this case, the problem is not with “the prior” but rather that there’s no mathematical model here at all. Roughly speaking, you could say that the “parameter” theta is something about why Paul Pelosi was attacked and the “data” y are whatever we heard in news reports—but there’s no model, no numbers, etc. So Bayes’ theorem doesn’t apply, at least not until some details are specified.

By analogy, what if someone asked you, Are you more happy than you are sad? This might be an impossible question to answer, but, if so, we wouldn’t conclude that “subtraction needs revision” or that humans don’t subtract. It would be more accurate to say that happiness and sadness are not numbers, and no model has been specified to map the emotions of happiness and sadness into numbers that can be added or subtracted.

By saying this, I’m not claiming that people are Bayesians. Bayesian inference is hard enough to do even by professionals, so it would be a bit much to expect that people could do it internally without even trying. My point is only that there are too many steps between belief in political disinformation and any mathematical model for this example to be used to make any general claims of that sort.

If you were to try to set this up as a Bayesian inference problem, you’d have to specify your model more carefully. And, as we recently discussed, this is kind of impossible because of the need to specify a data model (a “likelihood”) conditional on the “all other possibilities” option.

So, yeah, I do agree with Alper, in this sense: Suppose you were to read about the Pelosi story and make a judgment such as, “I think there’s an 80% probability that the official narrative (the attack was from a right-wing extremist who was trying to attack Nancy Pelosi for political reasons) is true and a 20% probability that something else was going on.” There’s no way that this can be considered a Bayesian calculation, or even an approximate Bayesian calculation, because none of the inputs to a Bayesian analysis are there.
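For contrast, here is a minimal sketch of what an actual Bayesian calculation would need as inputs. Everything in it is invented for illustration: the list of hypotheses, the prior, and the likelihoods are placeholders, not an analysis of the actual story.

```python
# Minimal sketch of the inputs a Bayesian calculation requires.
# All hypotheses and numbers are invented for illustration only.

hypotheses = ["official narrative", "something else"]
prior = {"official narrative": 0.5, "something else": 0.5}       # P(theta)

# Likelihood: P(what we saw in the news | hypothesis). This is exactly the
# piece that is never specified in informal reasoning about the story.
likelihood = {"official narrative": 0.8, "something else": 0.2}

unnormalized = {h: prior[h] * likelihood[h] for h in hypotheses}
evidence = sum(unnormalized.values())                             # P(y)
posterior = {h: unnormalized[h] / evidence for h in hypotheses}   # P(theta | y)

print(posterior)  # -> {'official narrative': 0.8, 'something else': 0.2}
```

The point of the sketch is not the arithmetic, which is trivial, but that the "80%/20%" judgment described above skips every one of these inputs.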

In high-dimensional space, no one can hear you scream

One way to understand how this sort of story can play out—with disinformation (that is, lies), misinformation (mistakes), and plain old confusion—is by considering probability distributions in high dimensions. Or, to put it more simply, if you look at enough things, you can find data that seem to confirm or disconfirm any story. That’s how a lot of conspiracy theories work. Again, this is consistent with Alper’s point that people don’t aggregate information rationally.
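Here is a toy simulation of that "look at enough things" point, using purely synthetic noise (nothing here refers to any real data): generate a large number of unrelated details, each of which is pure noise, and count how many happen to look striking.

```python
# Toy simulation: with enough places to look, pure noise yields "striking" patterns.
import random

random.seed(1)
n_details, n_obs = 1000, 20   # 1000 unrelated "details", 20 noisy observations each
threshold = 2.5               # call a detail "striking" if its mean is this many SEs from zero

striking = 0
for _ in range(n_details):
    xs = [random.gauss(0, 1) for _ in range(n_obs)]
    mean = sum(xs) / n_obs
    se = (sum((x - mean) ** 2 for x in xs) / (n_obs - 1)) ** 0.5 / n_obs ** 0.5
    if abs(mean / se) > threshold:
        striking += 1

# Every detail is pure noise, yet some of them look like "evidence" for something.
print(f"{striking} of {n_details} details look striking")
```

With a thousand places to look, you will reliably find a few apparent confirmations of whatever story you started with.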

Connections to junk science

Junk science can be a better thing to talk about than political disinformation because the stakes are usually lower. Not always—consider, for example, the gremlins guy and the Freakonomics team who’ve been doing their best to muddy the waters regarding global warming, and of course there was that econ paper with the Excel error—but most of the junk science we’ve seen has been about silly things like sex ratios and ESP and ovulation-and-voting and himmicanes and talking monkeys: researchers pushing their pet theories and going on NPR or getting all-expenses-paid beach vacations courtesy of the Edge Foundation, but not threatening our way of life.

Anyway, junk science also has that high-dimensionality thing, where with enough data you can find evidence in support of just about any hypothesis. The misunderstanding of this point is related to what Tversky and Kahneman called “the law of small numbers,” by which people expect that small, noisy samples will look just like what theory predicts. Researchers draw strong conclusions from noisy data that do not really support these claims (see here for a discussion of an example).
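To see the law-of-small-numbers problem numerically, here is a hypothetical simulation (all numbers invented): the true effect is tiny, the measurements are noisy, and small samples routinely produce estimates that are badly exaggerated or have the wrong sign.

```python
# Sketch of the law-of-small-numbers problem: small, noisy samples
# routinely exaggerate a tiny true effect or get its sign wrong.
import random

random.seed(2)
true_effect, noise_sd, n, sims = 0.1, 1.0, 15, 10_000

exaggerated = wrong_sign = 0
for _ in range(sims):
    estimate = sum(random.gauss(true_effect, noise_sd) for _ in range(n)) / n
    if abs(estimate) > 3 * true_effect:
        exaggerated += 1
    if estimate * true_effect < 0:
        wrong_sign += 1

print(f"{exaggerated / sims:.0%} of small-sample estimates are at least 3x the true effect")
print(f"{wrong_sign / sims:.0%} have the wrong sign")
```

A researcher who expects a sample of 15 noisy measurements to "look like" the underlying effect will read these exaggerated estimates as strong evidence.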

A couple years ago we discussed the way in which junk science has gone upmarket over the past century: it used to be autodidacts and the occasional eccentric professor who would push astrology, ESP, ghosts, etc., but in recent years comparable ideas have been promoted by academic leaders, the National Academy of Sciences, etc. I guess something similar has been happening with political disinformation and conspiracy theories, which formerly were the province of outsiders but now are embraced by nearly half of America’s political establishment.

Political science

What’s going on with these examples of political disinformation? It’s:
1. Unlikely stories
2. That are offensive
3. Believed without regard to evidence
4. Promoted by leading political and media figures
5. With potential political consequences.

It takes all these features in combination to get the full horror. We don’t care so much about JFK or UFO conspiracy theories, but the idea that lots of people still believe what Alex Jones tells them . . . that’s more scary.

My main reaction to Alper’s comment is that, by focusing on the question of individual irrationality, he’s missing step 4, the promotion by leading political and media figures. The problem is not just buffoons such as Ted Cruz or Al Sharpton who get attention by stoking the outrage machine; it’s all the more mainstream political and media figures who promote these guys or politely set aside their records of promoting disinformation.

That said, I see the relevance of Alper’s point, in that individual irrationality is necessary for the whole system to work. The dudes who framed Dreyfus could only do so effectively because there were enough people at all levels who were willing to believe in the absence of evidence. So you could say that political manipulators are weaponizing human cognitive failures. But I think it makes more sense to consider the system as a whole, to say that what makes disinformation work is a network of people who see gain in spreading these stories. Bayes’ theorem, as a model of individual or institutional rationality, doesn’t really address this network phenomenon.

43 thoughts on “Does the existence of widespread belief in political disinformation demonstrate that humans are not Bayesians?”

  1. While I agree that it can be problematic to map human behaviour and belief onto a mathematical theorem, I do see the Paul Pelosi example as being quite emblematic of Bayesian thinking. Politicians are applying a likelihood function where the probability of obtaining votes – or maybe feeling part of the “team” – is higher when the conspiracy hypothesis is believed to be true. If you have a weak prior on the conspiracy hypothesis, you need a strong belief that votes are conditional on the conspiracy hypothesis being true. Of course, the issue is that the likelihood statement isn’t based on a whole lot of good data, maybe there’s some polling in there, but, still, it’s likely just built on a hunch or subjective belief. You’re not actually solving for theta. As well, just what “y” stands for is not always well defined; it’s a mix of votes and keeping up with the Joneses and stuff like that. But I still think the acceptance of the conspiracy hypothesis can be broken into a prior that is updated based on how strongly tied your desired outcome is to the hypothesis being true.

    • This seems right to me. At least *some* politicians are not updating based on what they think happened but what they think their voters want to hear them say. The other politicians (in government, in media, etc.) are manipulating the processes through which the rest update for gain, however construed.

  2. “Bayes’ theorem … is only as good as its assumptions.”

    …what’s the difference between human Beliefs & Assumptions in any aspect of human cognition ?

    either one might be based upon good/bad evidence or misinformation/disinformation

    the chosen semantics here seem unhelpful in answering whether “widespread belief in political disinformation demonstrate that humans are not Bayesians?”

  3. I’m mostly in agreement with Andrew, but it should be noted that humans might be informal or fuzzy Bayesians in a way that could capture the problem of disinformation, or conspiracy theories, or really just anything else.

    The basic distinction is something like political-Bayesianism (as practiced by the median politician, or median voter), and the scientific-Bayesianism described above. Much like how we cannot expect a random individual pulled off the street to know or calculate a Nash equilibrium but might still expect him or her to think rationally about goals and pursue behavior (most of the time under broad conditions) that he or she reasonably expects to secure those goals. The median MAGA republican is not going to specify a parameter theta and update priors with any mathematical model—but he likely holds priors about the world that lead him to update his priors in counter-productive ways.

    If I read the news as a strongly committed Democrat (Republican), I hold priors about how much weight to give mainstream news reporting, or which news sources I will read about any given political event. In that instance, I, as the strong Democrat, conclude—to pull Andrew’s example—that there is an 80/20 likelihood of the news story being correct, or that something else is going on. But as a strongly committed MAGA Republican perhaps I start with 60/40, or even lower, because I generally distrust mainstream reporting. (It wouldn’t surprise me if some of them started with priors that put the mainstream narrative underwater.)

    Moreover, updating in this fuzzy political-Bayes model does not proceed deductively through math, but inductively or abductively through the social world they created which itself is further influenced by the biased priors on how to update (i.e., which news or “news” sources to trust).

  4. Rather than defend my contention about humans and Bayesian revision, I note that
    1. Couy Griffin said when sentenced in June,
    “My actions on January 6 was a result of my faith,” said Griffin, who has earlier said he went to pray over the crowd. “I live a life devoted to the Lord.” 
    2. Paul Pelosi was attacked on October 28, 2022 and so the incident has been upstaged by the election itself. The current status of election denial winners and losers can be found at
    https://www.cnn.com/interactive/2022/11/politics/election-deniers-winners-losers-midterms-2022/
    3. Since then, attention has become focused on George Santos.
    4. Despite the odd lawsuit, Alex Jones seems to be doing quite well.

  5. It sounds like the basic question is whether people behave as Bayesians, much like the questions economists have about whether people behave “rationally” or not. Behavioral economics examines a number of ways in which people are not “rational” and deviate from what that theory would predict. Bayesian analysis would predict how beliefs should be updated in the face of new information. I have no doubt that there are examples consistent with, and at odds with, this theory.

    The Pelosi case and the 2020 election case share these features: the initial belief is strongly held and seems unrealistic to many of us. Additional evidence is collected (police findings regarding the Pelosi attack, investigations into claimed election improprieties) that leads most of us to lower our assessment of the initial belief, reinforcing our view that these theories were screwy to begin with. But for some (many?) of the initial believers, the new evidence is just further evidence that they were right to begin with. Does this provide evidence that people are not Bayesians and that Bayes is a normative theory and not descriptive?

    Andrew focuses on the mathematical question of whether Bayes Theorem applies without a model with quantitative probabilities. I question whether these cases are relevant at all. One person may view the “evidence” as decreasing the probability of the conspiracy theory while another may view the same “evidence” as increasing it. Such “belief perseverance” and “confirmation bias” are clearly at work, but I don’t see this as relevant to the question of Bayesian thinking.

    Let me put this more starkly as a question: if someone’s prior probabilities are out of whack with what we might consider the existing “evidence,” does this mean that their beliefs/actions are not Bayesian? Even if they update their priors by reading the evidence in the opposite way than most of us would, does that mean they are not Bayesian?

    Perhaps I am agreeing with Alper regarding Bayes being normative and not descriptive. But I’m confused as to whether Bayesian analysis depends on the prior, how the prior is updated based on new information, both of these, or neither? I am legitimately confused.

    • Dale:

      Again, I think that a big thing that’s missing in the discussion of individual rationality, motivated reasoning, etc., is the role played by active manipulators, people such as Ted Cruz who presumably know what is actually going on but still spread wrong arguments so as to mislead people.

      • Andrew:

        Couy Griffin seems to me to be a true believer, of whom there appears to be an overabundance, willing to literally fight for their beliefs. Sincere but, to use a technical term, nuts and dangerous. As to Ted Cruz, his Princeton and Harvard credentials make one wonder what goes on in those places.

        • Paul:

          Again, my point is to focus on the interaction of the true believers and the political entrepreneurs. Without the entrepreneurs, whether they be Ted Cruz, Al Sharpton, or the old-school spreaders of the Protocols of the Elders of Zion, there wouldn’t be all these convenient stories for the true believers to believe. Without the true believers out there, the entrepreneurs would have no need to spread the stories.

      • Andrew –

        > Again, I think that a big thing that’s missing in the discussion of individual rationality, motivated reasoning, etc., is the role played by active manipulators, people such as Ted Cruz who presumably know what is actually going on but still spread wrong arguments so as to mislead people.

        I disagree. I think you have the direction of causality wrong.

        People with a pre-existing “motivation” to see what Cruz says as confirming their views will do so, just as those with a pre-existing belief that he’s a grifter will see what he says as confirming evidence that he’s a dope.

        Anyway, bottom line is that Bayesian reasoning and motivated reasoning are hardly mutually exclusive.

        There is (virtually) no Bayesian reasoning without “motivation,” as “motivation” (pattern recognition, identity-organized psychological attributes) is hard-wired.

        The best you can do is try to control for the effect, particularly when dealing with polarizing topics.

        • > Again, my point is to focus on the interaction of the true believers and the political entrepreneurs. Without the entrepreneurs, whether they be Ted Cruz, Al Sharpton, or the old-school spreaders of the Protocols of the Elders of Zion, there wouldn’t be all these convenient stories for the true believers to believe. Without the true believers out there, the entrepreneurs would have no need to spread the stories.

          That, I think, is closer to my view.

          Let me put it this way:

          How many people would have no opinion on a given topic until they hear what Cruz has to say about it?

          But I still think there’s too much emphasis on the “entrepreneur” and the “story.”

          (imo) They are both just manifestations of the underlying dynamics (pattern recognition, human psychology and cognitive biases) of motivated reasoning. Neither is causal. And neither are they causal when aggregated through interaction. (imo)

    • The issue isn’t one of priors, it’s of likelihoods…

      “If there were a conspiracy to protect Pelosi from revelations of his intimate affairs, then of course the police in San Francisco would say that he was attacked by a right winger…”

      If you have certain models for the data, then EVERYTHING is evidence your initial gut feeling was right

      People think of “the data model” as some objective thing in the world we can all agree on, and only the prior is controversial. It’s so far from true it’s kind of insane that people act like that.
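      A small sketch of that point, with invented numbers: the same prior and the same observation, run through two different data models, move belief in opposite directions.

```python
# Same prior, same observation, two different data models -> opposite updates.
# All probabilities are invented for illustration.

def update(prior_conspiracy, p_obs_if_conspiracy, p_obs_if_official):
    """One step of Bayes' rule for P(conspiracy | observation)."""
    numerator = prior_conspiracy * p_obs_if_conspiracy
    denominator = numerator + (1 - prior_conspiracy) * p_obs_if_official
    return numerator / denominator

prior = 0.3  # observation: an official report contradicts the conspiracy story

# Data model A: an official denial is more likely if there is nothing to the story.
posterior_a = update(prior, p_obs_if_conspiracy=0.5, p_obs_if_official=0.9)

# Data model B: "of course they'd deny it if there were a cover-up."
posterior_b = update(prior, p_obs_if_conspiracy=0.95, p_obs_if_official=0.6)

print(f"model A: {prior:.2f} -> {posterior_a:.2f}")  # belief in the conspiracy drops
print(f"model B: {prior:.2f} -> {posterior_b:.2f}")  # same observation, belief rises
```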

      • > People think of “the data model” as some objective thing in the world we can all agree on, and only the prior is controversial.

        Bayes’ theorem can incorporate as many “data models” (likelihoods) as you want. The posterior for a given explanation depends on how well each explains the evidence weighted by the prior probabilities.

        The crazy thing is trying to judge one possible explanation without regard for the others, i.e., ignoring the denominator and only looking at the numerator (unnormalized posterior). Or even ignoring the prior in the numerator and only looking at the likelihood.
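        A small numerical sketch of that (explanations and numbers invented): the posterior for each explanation comes from normalizing prior times likelihood across all of them, and a vague "anything could happen" explanation that assigns middling probability to every report gradually loses out to one that predicts the reports specifically.

```python
# Posterior over competing explanations: normalize prior * likelihood across
# all of them. Explanations, priors, and likelihoods are invented.

posterior = {"official narrative": 0.4, "cover-up": 0.2, "anything could happen": 0.4}

# P(report | explanation) for three successive news reports. The vague
# explanation gives every report a middling probability of 0.5.
report_likelihoods = [
    {"official narrative": 0.8, "cover-up": 0.1, "anything could happen": 0.5},
    {"official narrative": 0.7, "cover-up": 0.2, "anything could happen": 0.5},
    {"official narrative": 0.9, "cover-up": 0.1, "anything could happen": 0.5},
]

for lik in report_likelihoods:
    unnormalized = {h: posterior[h] * lik[h] for h in posterior}  # numerators only
    evidence = sum(unnormalized.values())                         # the denominator
    posterior = {h: unnormalized[h] / evidence for h in posterior}

print({h: round(p, 3) for h, p in posterior.items()})
# Judging any single numerator in isolation, without the denominator,
# would not show that the precise explanation has come to dominate.
```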

        • Still, if you’re working informally “in your head”, it’s easy for people to post hoc modify their model of the world to say that “observed thing X is evidence for my world view”.

          “they couldn’t find evidence of election tampering in 2020” …. “well of course not, because of course people who tamper with elections will cover it up”

          “they found evidence of election tampering in 2020” … “well of course they did, because fraud was widespread”

          etc.

          If your world view automatically implies everything seen is predicted by your world view… then you’ll never change your mind. Why would you? your world view is infallible!

        • > If your world view automatically implies everything seen is predicted by your world view… then you’ll never change your mind. Why would you? your world view is infallible!

          For the same reason we observe the “god of the gaps” phenomenon. Eventually the explanation making precise predictions wins out over the vague one and supplants it.

          The likelihood for vague “explain anything” explanations is small and flat everywhere. Bayes theorem rewards explanations that make precise “risky” predictions that turn out accurate.

          Of course you can set your favorite prior to one and the rest to zero, but even this “crazy” behavior is described by Bayes theorem.

          So I really have no idea what the OP is going on about.

      • Daniel –

        > If you have certain models for the data, then EVERYTHING is evidence your initial gut feeling was right

        That’s how I see it.

        But then even if you’d disagreed, I’d see your comment as evidence I was right. :-)

  6. I’m not sure how anyone can be anything other than Bayesian in an informal sense. If you have some uncertain opinion, incremental evidence causes you to update your probabilities in the direction of your interpretation of that evidence. Formal Bayesianism, in which people put numbers to these probabilities, model the probabilities given the incremental evidence, and optimally update, is, like Dale Lehman says above, akin to the question of whether or not people really optimize.

    All of Paul Alper’s and Andrew’s issues fit nicely into this framework. Couy Griffin apparently has no uncertainty, so he is behaving in a perfectly Bayesian fashion. Those who assess evidence from Ted Cruz as highly reliable update their priors by being more convinced. Those who regard New York Times reporting as inversely truth-related update their priors by being more convinced of a conspiracy theory the Times pooh-poohs. Bayesianism depends on p(Evidence|Hypothesis). The MAGA model of that calculation doesn’t appear to be Paul Alper’s, but that doesn’t make its adherents non-Bayesian.

    • Jonathan –

      > Those who assess evidence from Ted Cruz as highly reliable update their priors by being more convinced.

      I doubt that’s how it works (for the most part).

      They are already convinced or not, and what Cruz says is filtered as confirmation of a preexisting opinion, or ignored.

      For example, how many people (who had “priors” on this topic) would you guess didn’t believe that material evidence was being withheld, but then started to think maybe it was being withheld because Cruz expressed doubts? How many people (with “priors”) thought that evidence was being withheld, but then believed otherwise when they read the NYTimes?

      I think it’s unlikely that many people “update priors” (on this or other similar topics) after hearing Cruz or reading the NYTimes.

      • “They are already convinced or not, and what Cruz says is filtered as confirmation of a preexisting opinion, or ignored.”

        First, I think we disagree about the definition of “convinced.” By your statement above, no one ever changes their mind… ever. Further, convictions which cannot be changed have no use for confirmation. What would be the point?

        Just about every human conviction is conditional on reality, though some are of course much more influenced by reality than others. (That’s just the strength of the prior.)

        If what you’re saying is that if (a) they trust Cruz; and (b) what Cruz says is not in the direction of their prior then (c) they ignore him — well, I disagree. It’s a counterfactual that awaits some sensible pronouncements from Cruz to test, however.

        I fully grant that people choose sources of information that are more likely to support their priors. Most people seem to lack a willingness to test their own convictions. But that has nothing to do with whether or not, when confronted with evidence that *they believe* contradicts their priors, they update those priors.

        • Indeed, there are for example many cases of people showing up in the hospital with severe COVID and wanting the vaccine because they’ve changed their mind about how bad it is. Of course, it’s too late at that point, but once it’s something they can’t ignore they do change their opinion.

        • Jonathan:

          I doubt there are a lot of people who trust Ted Cruz, any more than there are a lot of people who trust Hillary Clinton. Lots of people have voted for Cruz and Clinton because they expect them to support reasonable policies, but that’s not the same as trusting them. When Ted Cruz pushes a conspiracy theory, I don’t see this as convincing people; it’s just a way to keep the theory out there so more people are exposed to it.

        • Jonathan –

          Apologizing in advance, but I’m going to ramble. I hope it’s not too disorganized to follow (or for you to lose interest in following).

          > By your statement above, no one ever changes their mind… ever.

          Ok, that’s a big fat fly in my ointment – as Daniel points out.

          But here’s where I’m coming from….

          You say:

          > But that has nothing to do with whether or not, when confronted with evidence that *they believe* contradicts their priors, they update those priors.

          I’ve observed people arguing about climate change for a long time. As Dan Kahan liked to say, opinions about climate change (in general, among the public) aren’t about what you know but about who you are. That’s why there’s such a strong partisan signal in opinions about whether or not ACO2 emissions pose a risk by warming the atmosphere.

          Within that context, the “consensus side” has had to come to grips with the shortcomings of the “information deficit model” where, as you probably know, reality seems counterintuitive: giving people evidence of how ACO2 emissions affect the atmosphere does not lessen dismissal of evidence of climate change among those who are predisposed to disbelieve that evidence. (The shortcomings of the information deficit model apply outside that context also).

          So with that said, I should still be careful about speaking in absolutes; yes, people do sometimes change their minds on an issue – but I think there are a couple of important mediators/moderators in the causal mechanism of an existing “bias” changing – where new information might prompt an “updating of priors.”

          Primary, IMO, is the degree to which someone is “identified” with an existing viewpoint. What is the level of personal attachment or importance? Or perhaps the source of the information can be important.

          Within the climate change world (and indeed, in the larger world of political organizing) one idea is that people are more likely to update their priors (rather than double down on existing biases) if they like (or identify with) the person delivering the information and/or if that information is delivered as a personal narrative rather than in a clearly rhetorical frame that would likely trigger a hostile response. I know there’s some evidence that this works. But then again we have examples like Bob Inglis, who got driven out of the Republican Party, despite being a pretty conservative fella, because he tried to lobby for support for legislation to address climate change.

          Another thing I saw in the climate change world – I ran across many “skeptics” who said that they were concerned about the risk of ACO2 emissions until they dug into the science, which then convinced them that actually, there’s nothing to worry about. They point to themselves as examples of people changing their minds when they learned new information. Except my impression is that they were mostly missing a critical aspect; they are almost all RWers or libertarian types. That ideological signal means something. I don’t think it means that their political views necessarily determined how they interpreted information, but instead that their political views mediated an underlying dynamic whereby their desire for identity clarity affected how they interpreted facts about climate change. As their Identification on the issue grew, so did the extent to which their identity affected how they interpreted the information.

          So you say this:

          > Further, convictions which cannot be changed have no use for confirmation.

          I agree. It’s not that people have biases and then look for information to confirm their views. They have a bias and then because of that bias they interpret the information in such a way that it fits their bias. As their identification on the issue grows, so does that dynamic. I hope that distinction makes some sense.

          > If what you’re saying is that if (a) they trust Cruz; and (b) what Cruz says is not in the direction of their prior then (c) they ignore him — well, I disagree. It’s a counterfactual that awaits some sensible pronouncements from Cruz to test, however.

          Sure. Counterfactuals are hard. But I point again to Bob Inglis as a potentially instructive example.

          Trying to pull this together – I think context matters. This is all multifactorial and context dependent.

          We’re talking here about a situation that involves Nancy Pelosi and Ted Cruz. Those people who are paying attention and actually care about the issue are mostly going to be heavily identified one way or the other. Pelosi and Cruz are strong identity flags. As Andrew says below, it’s not so much that they “trust” Cruz (or the NYTimes), but they see Cruz (or the NYTimes) as an identity-signal.

          Yeah, I don’t think there are going to be very many people who are going to change their mind one way or the other based on what Ted Cruz (or the NYTimes) has to say about it.

          If they think it’s likely that Paul Pelosi was attacked by his gay lover, and that law enforcement is hiding relevant information because there’s a conspiracy going on where law enforcement is protecting the Pelosis, then by definition they’re going to be heavily identified on the issue. If they identify differently, they’re going to hear speculation that Pelosi was attacked by his gay lover, and that law enforcement is covering it up, and consider that speculation as nutty.

          IMO, that’s the perfect set up where the limitations of the deficit model will show up. With more information, pretty much no matter what that information is, or who delivers it, people aren’t going to “update their priors” but fit the information (no matter what that information is) within the parameters of their existing priors.

        • Andrew: if there isn’t anyone who believes (or disbelieves) Ted Cruz or Hillary Clinton, then their pronouncements aren’t evidence and thus do nothing to disprove any form of Bayesianism.

          Joshua: Yeah, it was long, but I’ll address two parts of it. (a) On your global warming example. If your point is that two people can see exactly the same CO2 evidence, purport to basing their opinion on evidence and still come to different conclusions based on their political dispositions (and other multifactor psychological causes), I agree with you. But there can be several reasons for that, one of which is that they had more diffuse priors in the first place; another is that they have better (or worse) abilities to spot errors or biases in the analysis. Another is they are incompetent at evaluating evidence. None of this refutes informal Bayesianism, e.g. updating beliefs in the directions of perceived evidence.
          (b) Re your final paragraph: Maybe. But again, everyone has (under Bayesianism) their own internal model of p(Evidence|hypothesis). Your model makes that everywhere=1 for all people and all evidence for some particular subset of hypotheses. There are probably a few hypotheses where that is true, such as “Am I insane?” But for garden-variety political opinion, I doubt it as a general phenomenon.

        • Jonathan:

          I’m not saying that nobody believes anything that Cruz or Clinton says, just that I doubt that many people view them as trusted sources.

          Just for example, Colin Powell was a trusted source. When he said Iraq had nukes, that made many people more confident in the claim. When Ted Cruz or Hillary Clinton says something, this can have an effect by amplifying the signal. I don’t think many people will say, “Oh, Ted Cruz says X, so I believe X.” But when Ted Cruz says X, this can increase the spread of X, and then people can decide to believe X for whatever reasons. Nobody trusted the Russian secret police either, but that didn’t stop them from creating all sorts of problems in the world by spreading the Protocols.

        • Andrew: I don’t think it’s that hard to cram “signal amplification” into a Bayesian context, but whatever it is, it has little to do with whether or not people update their opinions based on their evaluation of evidence.

        • Thanks for the reply (couldn’t really follow the last part)

          Just randomly saw a 60 minutes clip about miracles at Lourdes. The last line…

          For those who believe, no explanation is necessary; for those who do not, no explanation is possible.

  7. Here is sparkling evidence that Bayes theorem needs revision.

    […]

    “Belief Perseverance” and “Confirmation Bias” always win out.

    I don’t see any behavior contrary to Bayes theorem here. In fact it seems like a perfect example of it in action. Perhaps if you explain what kind of revision you have in mind, it would be clearer.

  8. A vastly better explanation of our flood of misinformation and scientific fraud and malfeasance is offered here:

    https://www.thenewatlantis.com/publications/reformation-in-the-church-of-science

    Here are a couple more in case you still think that scientists are devoted truth seekers.

    https://www.nature.com/articles/524269f
    https://royalsocietypublishing.org/doi/10.1098/rsos.160384
    https://associationofanaesthetists-publications.onlinelibrary.wiley.com/doi/ftr/10.1111/anae.15297
    https://www.tabletmag.com/sections/science/articles/pandemic-science

    Of course people are biased. Conspiracy theories proliferate when there is evidence elite institutions are hiding the ball. As Musk says, the Twitter files show that all the “conspiracy” theories about Twitter, big tech, and the FBI and CIA were true.

    I would exercise caution in using climate science as an example here. It’s more an example of a narrative run amuck as it got taken up by political activists who are competing with each other for attention and money. They compete by being more extreme and loud than the competition.

    • Dpy:

      I followed your first link above. It’s interesting, also seemed a bit confused. For example, it says, “But in the second half of the twentieth century, science programs started to take on a rapidly expanding portfolio of politically divisive issues: determining the cancer-causing potential of food additives, pesticides, and tobacco.” This is kind of a weird way of putting things. There’s no reason why determining the cancer-causing potential of food additives, pesticides, and tobacco should be politically divisive: this only happened through a huge amount of public relations efforts by industry, basically taking straight science and trying to make it controversial.

      Regarding the idea that “scientists are devoted truth seekers,” I don’t think anyone is claiming that! It depends on the scientist.

    • Really? “All” of the conspiracy theories? So the CIA really did install Kim Jong Un as a puppet leader, and the three letter agencies really are under the control of a cabal of satan worshippers who harvest adrenochrome from the blood of fetuses and infants to stay young?

      Perhaps the spread of disinformation is caused by people too sloppy in their thoughts to circumscribe their claims, spamming the same set of links over and over and over again across multiple pages, desperate for acknowledgement.

  9. I agree with Andrew, but I also agree that people are irrational. So, it is a mistake to model the way that (almost all) people think using probability or Bayes rule or anything else like that.

    In an interview, Leslie Lamport was asked, “Is LaTeX hard to use?” He replied, “It’s easy to use—if you’re one of the 2% of the population who thinks logically and can read an instruction manual. The other 98% of the population would find it very hard or impossible to use.”

    My estimate is similar: 98% of the population does not think conceptually (they think using pattern matching).

    Many years ago, I was in a meeting at work. The project involved calculating the fractal dimension of grayscale images. (The image intensity gives the height of the surface above the plane. It was a silly project, but that’s not the point of my story.) The presenter showed half a dozen or so images and also the fractal dimension that he had calculated for each image surface. Two of the values were less than 2. This prompted a long discussion as to what was special about these images that the surfaces could have a dimension less than 2. There were about a dozen people in the room, all engineers, scientists, mathematicians, several with Ph.D.s. I and the other mathematician in the room tried to point out that the numbers less than 2 must simply be wrong. We did not succeed in ending the discussion. Of course, further investigation showed that the numbers were wrong (due to some numerical instability in how the slope of a line was calculated).

    My point is, here is a bunch of technical people whose model of the world is so soft and squishy that they can easily add surfaces of dimension less than 2 to it.

    • David:

      That reminds me of a talk I attended over thirty years ago by a well-known mathematical statistician. He had some example of an Anova where there were 120 data points and some quantity added up to 119.3, or something like that. I asked him why there was this rounding error and he didn’t get the point of my question at all. A small amount of reflection made it clear that the number had to be 120, but he hadn’t thought about it, I guess because it was an analysis with actual numbers and he was a theorist . . .

      Other times, I’m sure I’m the one in the room whose brain is turned off!

  10. Okay, so let’s replace the paradigm of Bayesian thought with a Frequentist one, which explains most of the examples above.

    There are two kinds of people: those who assume the null hypothesis is true, and those who assume it is false. Assume it’s false, and you’re in conspiracy theory territory. But it’s never precisely true, so there you go.

  11. For what it’s not worth, I think there is a Bayesian aspect to motivated reasoning having to do with priors, but mainly it’s bad analysis–not checking the data and neglecting terms and other errors.

    One of my favorite quotations is “All mathematicians make mistakes; good mathematicians find them”–Einstein (In which you could replace mathematicians with thinkers–so there is hope for all of us as long as we’re alive.)

    When Colin Powell told the UN something like, “As an old soldier, I know no reason why there should have to be a polished finish on the inside of a mortar tube,” my mind flashed on the scene from “Schindler’s List” where Schindler tells the camp guard he needs the children’s small hands to polish the inside of artillery shell casings. Later it turned out the finish was specified in the U.S. Army Manual which Rumsfeld had given Saddam Hussein.

    • Colin Powell has come into the discussion twice today. As it happens, we were classmates at P.S. 52 in the southeast Bronx back in the 1950s; we lived a few blocks from each other. In 2018, I revisited the school which is now named after him and, of all things, it has become a charter school for girls. Needless to say, his UN performance was a huge disappointment.

  12. I like the argument that for most people the cost of being wrong about national or international politics is very low and indirect, but the cost of being seen by peers to have hurting wrong opinions is high and obvious. So what most people say about national or international politics is an expression of group identity. If being right has immediate rewards and being wrong has consequences, most people update their beliefs in light of experience!

    • The posterior is used as an input for decision making (considering cost/benefits).

      There are no costs/benefits considered by Bayes theorem itself.

      Even if someone 100% believed Pelosi knew that guy beforehand, they still may decide to say something different in public.

      And since when do people think politicians say what they believe about themselves or each other? Or anything really? I mean that is a laughably childish belief.

      There is some seriously muddled thinking about Bayes theorem involved here and I wish more people were trying to get to the bottom of that.

      • Choosing to say something is modeled in Bayesian decision theory as taking the knowledge from the posterior for the state of the world, and the utility function for different actions, and choosing the action with the highest utility.

        Indeed, it’s easily possible to manufacture a utility where regardless of what you actually think about the world, you will always choose to say the same thing.

        This isn’t evidence against Bayes, just evidence for strong incentives pushing action in one direction.
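        A minimal sketch of that, with invented utilities: if the payoff for repeating the story is high enough no matter what is actually true, then the expected-utility-maximizing statement is the same for every posterior belief.

```python
# Bayesian decision theory sketch: posterior belief + utilities -> choose the
# statement with the highest expected utility. All utilities are invented.

def best_statement(p_conspiracy_true):
    # utility[statement] = (payoff if the conspiracy is true, payoff if it is false)
    utility = {
        "repeat the conspiracy": (10.0, 8.0),  # large social/political reward either way
        "contradict it":         (1.0, 2.0),   # small reward even when correct
    }
    expected = {
        s: p_conspiracy_true * u_true + (1 - p_conspiracy_true) * u_false
        for s, (u_true, u_false) in utility.items()
    }
    return max(expected, key=expected.get)

for belief in (0.9, 0.5, 0.1, 0.0):
    print(f"posterior belief {belief:.1f} -> say: {best_statement(belief)}")
    # the chosen statement is "repeat the conspiracy" at every belief level
```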
