Gaurav Sood’s review of the book Noise by Kahneman et al.: In writing about noise in human judgment, the authors didn’t wrestle with the problem of noise in behavioral-science research. But behavioral-science research is the product of human judgment.

Here it is. This should interest some of you. Gaurav makes a convincing case that:

1. The main topic of the book—capriciousness in human judgment—is important, it’s worth a book, and the authors (Kahneman, Sibony, and Sunstein) have an interesting take on it.

2. Their recommendations are based on a selective and uncritical review of an often weak literature; consider, for example, this description of a study, which seems about the closest thing possible to a Brian Wansink paper without actually being by Brian Wansink:

“When calories are on the left, consumers receive that information first and evidently think ‘a lot of calories!’ or ‘not so many calories!’ before they see the item. Their initial positive or negative reaction greatly affects their choices. By contrast, when people see the food item first, they apparently think ‘delicious!’ or ‘not so great!’ before they see the calorie label. Here again, their initial reaction greatly affects their choices. This hypothesis is supported by the authors’ finding that for Hebrew speakers, who read right to left, the calorie label has a significantly larger impact.”

Kinda stunning that they could write this with a straight face, given all we’ve heard about the Cornell Food and Brand Lab, etc.

In writing about noise in human judgment, Kahneman, Sibony, and Sunstein didn’t wrestle with the problem of noise in behavioral-science research. But behavioral-science research is the product of human judgment.

Here are my comments on the book and its promotional material from last year. I was pretty frustrated with the authors’ apparent unfamiliarity with the literature on variation and noise in statistics and economics associated with very famous figures such as W. E. Deming and Fischer Black. In his review, Gaurav persuaded me that the authors of Noise were on to something interesting, which makes me even sadder that they plowed ahead without more reflection and care. Maybe in the future someone can follow up with an article or book on the topic with the virtues of point 1 above and without the defects of point 2.

Actually, maybe Gaurav can do this! A book’s a lot, but an article fleshing out point 1 in a positive way, without getting snowed by noisy evidence or bragging about “discovering a new continent,” actually linking the theme of noise in human judgment to the challenges of interpreting research results . . . this could be really useful. So I’m glad he took the trouble to read the book and write his review.

6 thoughts on “Gaurav Sood’s review of the book Noise by Kahneman et al.: In writing about noise in human judgment, the authors didn’t wrestle with the problem of noise in behavioral-science research. But behavioral-science research is the product of human judgment.”

  1. “the authors didn’t wrestle with the problem of noise in behavioral-science research.”

    Just noise? :) IMO “noisy” is the high-quality subset of measurements from behavioral-science research devices. The other subsets include “faulty”, “biased”, and “totally screwed”.

    STATISTICAL ENVIRONMENTAL WARNING. HIGH DANGER! Stocks expected to open down today. Readers should avoid ocean swimming and driving. Politicians anticipating a loss in November, please consult your health care provider for important statistical health considerations.

  2. Here is the tl;dr on the calorie study from Cass Sunstein:

    https://www.bnnbloomberg.ca/putting-the-calorie-count-before-the-cheeseburger-1.1226589.amp.html

    This caught my eye:

    “There are broader lessons here. Behavioral economist and Nobel laureate Richard Thaler emphasizes the importance of “supposedly irrelevant factors” – design details that ought not to have an impact, but that can make all the difference.”

    One wonders whether the word “supposedly” here – rather than “seemingly” – is a direct shot across Andrew’s bow, since if you google “piranha problem” it takes you right back to this blog.

    • Matt:

      I doubt that Sunstein’s column is in any way a response to me. I think he’s a true believer, and it seems that he learned nothing from the examples of Wansink, Ariely, etc. You know the saying: If you want to fool other people, the first step is to fool yourself.

    • Gaurav writes, “It is useful to think about human judgment systems as mathematical functions. We should expect the same inputs to map to the same output. It turns out that it isn’t even remotely true in most human decision-making systems.”

      I concede that it can be useful to think of human judgment as a mathematical function, but I do not see why we should expect all of our human judgments to be functional. In fact, it seems to me that many of the concepts that enter into our decision-making processes are not functional. It may not be the fault of the decision maker, but simply the nature of the decision, that there is more than one “right” answer to the question. There are certain areas where that might really bother us, like the dispensation of justice. But there are also areas where it ought to bother us that people are trying to make human decision making functional, like assessing risk, because we don’t know the future. What is the problem with people assessing risks differently? On the contrary, since there isn’t a correct answer, we ought to have a diversity of assessments; otherwise we all make the same mistake at the same time, which is much worse than having a few of us wrong all the time.
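      To make the function analogy concrete, here is a minimal sketch in Python (the sentencing rule, the noise level, and the function names are invented purely for illustration, not taken from the book or the review). It contrasts a judgment process that behaves like a function, where the same input always yields the same output, with one that adds idiosyncratic variation, and it measures noise as the spread of judgments of one identical case:

```python
import random
import statistics

# Hypothetical example; the rule and numbers are made up for illustration.
# A "functional" judgment: the same input always maps to the same output.
def deterministic_judge(case_severity):
    return 2.0 * case_severity  # e.g., a fixed sentencing formula

# A "noisy" judgment: the same input maps to different outputs on different occasions.
def noisy_judge(case_severity, rng):
    return 2.0 * case_severity + rng.gauss(0, 1.5)  # idiosyncratic variation

rng = random.Random(0)
case = 5.0  # one identical case, judged repeatedly

deterministic_calls = [deterministic_judge(case) for _ in range(100)]
noisy_calls = [noisy_judge(case, rng) for _ in range(100)]

# Noise, in the book's sense, is variability in judgments of the same case.
print("deterministic spread:", statistics.pstdev(deterministic_calls))        # 0.0
print("noisy spread:        ", round(statistics.pstdev(noisy_calls), 2))      # > 0
```

      In these terms, a spread of zero on identical cases is a noise-free system; any positive spread is noise, regardless of whether the average judgment is biased. The disagreement in the thread is over whether that spread is always a defect, or, as in risk assessment, sometimes a useful diversity.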
