
I believe this study because it is consistent with my existing beliefs.

Kevin Lewis points us to this.


  1. Terry says:

    I’m confused. Isn’t this proper Bayesian reasoning? Shouldn’t we use our priors when evaluating new evidence?

    In the study, participants were shown random dot stimuli and asked to look for patterns. They then used the patterns they perceived in later evaluations. Why is this improper? Participants didn’t know the dots were random, so why should they assume they were? Wouldn’t it in fact be an error to assume randomness? Should the experiment be done with patterned stimuli as well?

    In real-world data, there very often are very strong patterns, and it is often very important that these patterns be recognized quickly. What is the optimal strategy under such conditions? Don’t you have to confront these questions before you fault people for their decision making?

    The Kahneman crowd likes to say things like “people often see patterns in random data.” But the flip side is that people often don’t see patterns in patterned data. Don’t you have to balance these two errors? Indeed aren’t the two errors inevitable?

  2. Terry says:


    The paper addresses Bayesian priors. It looks at differential updating of priors based on the patterns participants perceived. It seems to be saying that people put too much weight on priors.

    • In some sense this is in fact the right thing to do in Bayesian decision theory when you have asymmetric errors: you want to err on the side that produces the less bad outcome. If there are patterns, and it’s important to find them (maybe it means the difference between eating well and dying of poison mushrooms), then you’ll be biased towards the behavior that works better for you even if it isn’t correct. You can nearly starve to death while surrounded by edible mushrooms mainly because you know that one wrong move with an inedible Amanita and you die.
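    A minimal sketch of that asymmetric-loss point (my own illustration, not from the paper; the loss numbers and function names are made up): under Bayesian decision theory you pick the action with the lowest expected loss, and when one error is catastrophic, the optimal action can be the "wrong" one even at very low probabilities of harm.

    ```python
    # Hypothetical loss table: eating a poisonous mushroom is catastrophic,
    # skipping an edible one just costs a meal. All numbers are illustrative.
    LOSS = {
        ("eat", "poison"): 1000.0,   # you die
        ("eat", "edible"): 0.0,      # good meal
        ("skip", "poison"): 1.0,     # mild hunger
        ("skip", "edible"): 1.0,     # mild hunger, missed meal
    }

    def expected_loss(action, p_poison):
        """Expected loss of an action given P(mushroom is poisonous)."""
        return (p_poison * LOSS[(action, "poison")]
                + (1 - p_poison) * LOSS[(action, "edible")])

    def best_action(p_poison):
        """Bayes-optimal action: minimize expected loss."""
        return min(["eat", "skip"], key=lambda a: expected_loss(a, p_poison))

    # Even at a 1% chance of poison, skipping wins: 0.01*1000 = 10 > 1.
    print(best_action(0.01))    # -> skip
    # Only at very small poison probabilities does eating win: 0.0005*1000 = 0.5 < 1.
    print(best_action(0.0005))  # -> eat
    ```

    So the bias towards "see a pattern, avoid the mushroom" is rational whenever the loss ratio is steep enough, even though it produces many false positives.
    
    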
