Imperfectly Bayesian-like intuitions reifying naive and dangerous views of human nature

Allan Cousins writes:

After reading your post entitled “People are complicated” and the discussion that ensued, I [Cousins] find it interesting that you and others didn’t relate the phenomenon to the human propensity to bin probabilities into three buckets (0%, coin toss, 100%), and to how that interacts with anchoring bias. It seems natural that if we (meaning people at large) do that across most domains, we would apply the same in our assessment of others. Since we are likely to have more experiences with certain individuals on one side of the spectrum or the other (given we tend to see people only in particular rather than varied circumstances), it’s no wonder we tend to fall into the dichotomous trap of treating people as if they are only good or bad; obviously the same applies if we don’t have personal experiences but only see / hear things from afar.

Similarly, even if we come to know other circumstances that would oppose our selection (e.g., someone we’ve classified as a “bad person” performs some heroic act), we are apt to have become anchored on our previous selection (good or bad), and that reduces the weight we might place on the additional information in our character assessment. Naturally our human tendencies lead us to “forget” about that evidence if ever called upon to make a similar assessment in the future.

In a way it’s not dissimilar to why we implement reverse counting in numerical analysis. When we perform these social assessments it is as if we are always adding small numbers (additional circumstances) to large numbers (our previous determination / anchor), and the small numbers, when compared to the large number, are truncated and rounded away, which of course can leave our determination hopelessly incorrect!

This reminds me of the question that comes up from time to time, of what happens if we use rational or “Bayesian” inference without fully accounting for the biases involved in what information we see.

The simplest example is if someone rolls a die a bunch of times and tells us the results, which we use to estimate the probability that the die will come up 6. If that someone gives us a misleading stream of information (for example, telling us about all the 6’s but only a subset of the 1’s, 2’s, 3’s, 4’s, and 5’s) and we don’t know this, then we’ll be in trouble.
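
Here’s a minimal sketch of that trouble (the die, the censoring rule, and the uniform prior are all made up for illustration): the die is fair, but every 6 gets reported while only some of the other faces do, and a listener who treats the reported rolls as a random sample ends up far from the true 1/6:

    # Sketch: fair die, selectively reported rolls, naive estimate of P(6).
    import random

    random.seed(1)
    N_ROLLS = 100_000
    REPORT_PROB_NON_SIX = 0.4   # assumed chance a non-6 roll gets mentioned

    rolls = [random.randint(1, 6) for _ in range(N_ROLLS)]
    reported = [r for r in rolls
                if r == 6 or random.random() < REPORT_PROB_NON_SIX]

    # Naive posterior mean for P(6) with a uniform Beta(1, 1) prior,
    # treating the reported rolls as if they were a random sample.
    k, n = sum(r == 6 for r in reported), len(reported)
    print(f"true P(6):       {1/6:.3f}")                # 0.167
    print(f"naive estimate:  {(1 + k) / (2 + n):.3f}")  # roughly 0.33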

The linked discussion involves the idea that it’s easy for us to think of people as all good or all bad, and my story about a former colleague who had some clear episodes of good and clear episodes of bad is a good reminder that (a) people are complicated, and (b) we don’t always see this complication given the partial information available to us. From a Bayesian perspective, I’d say that Cousins is making the point that the partial information available to us can, if we’re not careful, be interpreted as supporting a naive bimodal view of human nature, thus leading to a vicious cycle or unfortunate feedback mechanism where we become more and more set in this erroneous model of human nature.
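
Cousins’s numerical-analysis analogy, as I read it, is about summation order: keep adding small terms to an already-large floating-point accumulator and each one can be rounded away entirely, whereas summing the small terms first preserves them. A minimal sketch (the particular numbers are mine, chosen to exaggerate the rounding):

    # Sketch: order of summation in float32.
    import numpy as np

    big = np.float32(1e8)                            # the "anchor"
    small = np.full(10_000, 1.0, dtype=np.float32)   # the "additional circumstances"

    acc = big
    for x in small:
        acc += x            # each 1.0 is below float32 resolution at 1e8
    print(acc - big)        # 0.0: the new evidence never registers

    print(small.sum() + big - big)   # 10000.0: summed small-first, the terms survive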

6 thoughts on “Imperfectly Bayesian-like intuitions reifying naive and dangerous views of human nature”

  1. A recent example in New Mexico of Bayesian rationality:

    https://www.washingtonpost.com/politics/2022/06/17/new-mexico-county-weighs-defying-order-certify-election-results/

    “[Couy] Griffin, the founder of Cowboys for Trump…refused to back down from assertions that the machines were not secure or apologize for leading a charge against a normally straightforward procedural vote that caused a week-long uproar.”

    “My vote to remain a no isn’t based on any evidence, it’s not based on any facts, it’s only based on my gut feeling and my own intuition, and that’s all I need,” Griffin said.

    According to the article, Griffin “was sentenced to 14 days in jail for entering a restricted area during the Jan. 6, 2021, Capitol attack.” If Griffin’s views and behavior do not put a damper on the belief that humans, unlike machines, engage in Bayesian revision, what will? Indeed, he may well end up a hero because he did not commit the political sin known as “flip-flopping.”

    • Maybe Griffin just meditates a lot and is openly acknowledging the cognitive biases we all engage in?

      Just kidding, of course.

      Well, maybe not entirely.

  2. Interesting to see this here on the day I posted my review of Seth’s recent book:

    https://junkcharts.typepad.com/numbersruleyourworld/2022/06/book-review-dont-trust-your-gut.html

    The book is a review of the recent “big hits” of big data analytics. One observation I have is that despite the volume of data, these researchers appear to want to unearth universal truths that apply to everyone – which is related to this propensity that Cousins described above.

    Another tendency I found in those studies is reifying intuitions driven by Bayesian models, which is not quite Andrew’s headline but could be. It appears that Bayesian models may have been used to fill in the “gaps” in the data, i.e., cells with few or no data points have estimated averages mostly driven by priors. Then these estimates are mined via pairwise comparisons to come up with “insights.” Such interpretations blur the interplay between priors and data inside the Bayesian framework.
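
    To illustrate the mechanism with a minimal sketch (made-up numbers, not the book’s actual models): in a conjugate normal-normal model, a cell’s estimated average is a precision-weighted blend of the prior mean and the cell’s own sample mean, so a cell with only a couple of observations mostly reports the prior back to us:

      # Sketch: posterior mean for one cell in a conjugate normal-normal model.
      def posterior_mean(ybar, n, sigma=10.0, prior_mean=0.0, prior_sd=1.0):
          data_precision = n / sigma**2      # precision contributed by the cell's data
          prior_precision = 1 / prior_sd**2
          return (data_precision * ybar + prior_precision * prior_mean) / (
              data_precision + prior_precision)

      print(posterior_mean(ybar=5.0, n=2))      # ~0.10: essentially the prior
      print(posterior_mean(ybar=5.0, n=2000))   # ~4.76: essentially the data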

  3. I agree wholeheartedly with the statement that people are complicated. But I would add that one issue I often see is that this complication is used to vilify or justify individuals based on aspects that are a lesser part of who they are and of what they mean to humanity or society at large.

    I ask for examples in class of individuals who are truly evil. Adolf Hitler comes up a lot. And then I talk about how he had people who loved him, how he was kind to animals, and how his wife loved him and, from all reports, he was good to her when they were together. We never even quite get to the good things that the Nazi party did for Germany. I don’t relate that to justify him (or the Nazis) but to get the students to consider that even though everyone is complicated, both good and bad, it’s both OK and justified to ignore the opposite side in some cases.

    • Psyoskeptic:

      The good a person does should not wipe out the bad, or vice versa. Fortunately, we are not usually in a position where we have to strike some sort of balance and decide whether the net contribution is positive or negative.

      For example we can applaud R. A. Fisher’s mathematical contributions and research creativity while being unhappy with his support of Nazis and his poor judgment when working with cigarette companies. A great guy to analyze your data in some settings but not others.

      Or, with my former colleague discussed here, I can applaud his dedication to teaching quality when he was department chair, while being unhappy with his closed-mindedness as department chair when it came to research he didn’t understand—unfortunately, that was an attitude he shared with most of his colleagues at the department at the time—and also being unhappy with his sexual harassment and his poor judgment working for the legal defense of a killer. If I were on some committee deciding on his employment, I guess the decision would be easy: the sexual harassment would probably be enough to get him fired, and the fact that he cared about teaching is relevant to his general job performance but not so much to whether he should be exempt from the sexual harassment rules. Similarly, I’d respect his judgment on some statistical questions while recognizing his abject incompetence in others. Indeed, in many ways he was worse than incompetent: he wasn’t just incompetent, he also didn’t seem to recognize how little he understood in areas outside his specialties.

      Anyway, my point is just that it was not up to me whether he got fired or forced to resign, so there’s no need for me to do some sort of weighing of his positives and negatives. I can just be aware of different positive and negative aspects of his personality and his deeds. I feel like a lot of trouble is caused by people feeling the need to make an overall judgment when this is not really necessary.

    • I don’t think you need to ignore the good parts of someone, even in extreme cases like Hitler. This is obviously an oversimplification, but I think people’s views on morality sort of simulate a probability distribution on a bounded line where one boundary represents “good” and the other represents “evil”. The peak of the distribution tells you where that person typically falls morally, but there’s going to be some degree of skewness and/or multi-modality that tells you how often they break from that trend.

      In Hitler’s case, his actions were so deplorable that the peak would be very close to the “evil” boundary. His good actions would be enough to shift the peak so that it’s not exactly *on* the boundary, but still extremely close. We would still conclude he was an evil person, without outright ignoring his good qualities.

      I guess my point is that I don’t think there’s any need for an a priori absolutist viewpoint when a more nuanced view *generalises* to an absolutist conclusion whenever one is genuinely warranted.
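
      As a minimal sketch of that metaphor (every number here is invented): put “good” at 0 and “evil” at 1 and use a mixture of Beta distributions, so the dominant mode sits near one boundary while a small secondary bump accounts for out-of-character acts:

        # Sketch: a moral "profile" as a mixture of Betas on [0, 1].
        import numpy as np
        from scipy import stats

        x = np.linspace(0, 1, 1001)   # 0 = "good", 1 = "evil"
        # 95% of the mass near the "evil" end, a 5% bump near the "good" end.
        density = 0.95 * stats.beta.pdf(x, 40, 2) + 0.05 * stats.beta.pdf(x, 2, 8)

        peak = x[np.argmax(density)]  # close to 1: the overall verdict is still "evil"
        good_half = 0.95 * stats.beta.cdf(0.5, 40, 2) + 0.05 * stats.beta.cdf(0.5, 2, 8)
        print(f"peak at {peak:.2f}, mass on the 'good' half = {good_half:.2f}")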
