
Tali Sharot responds to my comments on a recent op-ed

Yesterday I posted some comments on an op-ed by Tali Sharot and Cass Sunstein.

Sharot sent the following response:

I wanted to correct a few inaccuracies, which two of your commenters were quick to catch (Jeff and Dale).

It seems you have three objections:

1. “Participants did not learn about others’ opinions. There were no others; these were algorithms.”

The participants believed these algorithms were other people; we verified this with a questionnaire. It is very common to give information to participants as if it were coming from other people in order to control for different variables. Because participants believed these were other people it is a safe assumption that their responses would be the same if the answers they received were actually from other people. (Dale makes the same point)

2. “And, no, there was no way to learn about anyone’s ability to categorize geometric shapes, as all these categorizations were purely random.”

This is a misunderstanding. The categorizations of the potential (simulated) advisers were not random but were instead designed to consistently answer correctly either 80% of the time or 50% of the time. If participants sought and followed advice of the accurate advisers in Stage 2, their performance would thus be much better than chance.
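As an aside, the adviser design described above is easy to make concrete. The following is a minimal sketch, not the authors' code: the 80% and 50% accuracy figures come from the response above, and everything else (independent trials, a participant who always follows the adviser) is an assumption for illustration.

```python
import random

random.seed(0)

def follow_adviser(adviser_accuracy, n_trials=10_000):
    """Simulate a participant who always follows a simulated adviser
    whose categorization is correct with probability adviser_accuracy.
    Returns the participant's resulting proportion correct."""
    correct = sum(random.random() < adviser_accuracy for _ in range(n_trials))
    return correct / n_trials

# Following the accurate adviser yields performance near 80%,
# well above the chance-level 50% of the inaccurate adviser.
print(follow_adviser(0.80))  # roughly 0.80
print(follow_adviser(0.50))  # roughly 0.50
```

So a participant who sought out and followed the accurate advisers in Stage 2 would indeed perform much better than chance, which is the point of the design.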

3. You do not like the fact that we suggest our study says something about how people assess doctors, architects and so on.

I also dislike the title that was selected by the editor, as it creates an impression that the whole op-ed is about doctors, rather than just the first couple of sentences of the opening example. That being said, it is important to explain how a specific research project may be applied to broader issues, and it is routine to use real-world examples to illustrate in-lab tasks. For example, if we were conducting a study on perceptual attention using a task with moving dots on a screen, it would be reasonable to begin an op-ed by saying “imagine that you are driving and need to direct attention to the car on your left”. I agree with Dale again – it is not a big leap, and scientists should explain how their research can apply to real life situations.

P.S. Also see the P.P.P.S. in my previous post.


  1. Zad Chow says:

    Thanks for sharing the correspondence

  2. “it is a safe assumption that their responses would be the same if the answers they received were actually from other people.”

    This issue is at the heart of the methodology of experimental economics. It has been extensively discussed and empirically examined over the past three decades. Briefly, the conclusion is, no, this assumption is not safe. See here for a review:

  3. Coincidentally, I reread Cass Sunstein’s Wiser this last Thursday evening. There was brief mention of the ‘halo effect’ on pages 22, 47, and 48, although obliquely, within the context of the representativeness heuristic.

    Wiki contains several definitions of the ‘halo effect’ but disproportionately emphasizes the role of ‘attractiveness’ in judgment.

    This Wiki definition is similar to the one contained in the article: ‘The halo effect can also be explained as the behavior (usually unconscious) of using evaluations based on things unrelated, to make judgments about something or someone.’

    From Sharot and Sunstein’s NYT Opinion:

    ‘Instead, they most often chose to hear about blaps from co-players who were politically like-minded, even when those with different political views were much better at the task…

    In short, people sought and then followed the advice of those who shared their political opinions on issues that had nothing to do with politics, even when they had all the information they needed to understand that this was a bad strategy….

    ‘Why? This may be an example of what social scientists call the halo effect: If people think that products or people are good along one dimension, they tend to think that they are good along other, unrelated dimensions as well. People make a positive assessment of those who share their political convictions, and that positive assessment spills over into evaluation of other, irrelevant characteristics.’

    While I’m quite sure that the ‘halo effect’ could be a factor in the result cited above, I am not sure that, as a generalization, it holds only as unconscious bias.

    In my experience the bias can be a strategic power or reputational ploy to give an edge to the politically like-minded, even at the expense of being inaccurate. This is my experience of high school cliques and adult political circles. In the extreme, one can characterize such a behavioral strategy as entailing partisanship and polarization. But it can be a deliberate calculation too.

    And that is why partisanship and polarization are so dangerous, as the article goes on to suggest. After all, in political campaigns, spreading false news is just part of the landscape.

  4. Wow, I don’t see my earlier post on this thread.

  5. Thanatos Savehn says:

    Well, I suspect a category error. But then I’m deep into “Statistical Rethinking” and wondering if information theory’s explanation of the otherwise surprising prevalence of normal distributions is, after all, simply a reflection of the transition from one label to another. Enough of that. Once more unto the breach dear friends, once more.

  6. Dale Lehman says:

    Now, to come full circle – and somewhat back into Andrew’s critical view. The title of the op-ed can be faulted, as the authors themselves seem to agree – journalists need to start taking some responsibility for how they report things, and not just hide behind the excuse that things need to entice readers. But the authors do step over the line when they say:

    “Knowing a person’s political leanings should not affect your assessment of how good a doctor she is — or whether she is likely to be a good accountant or a talented architect. But in practice, does it?

    Recently we conducted an experiment to answer that question.”

    I do think this is irresponsible. A bit more humility is in order – along the lines of “we recently conducted an experiment that might provide insight into questions such as this.” Yes, that is a slight difference and a bit cumbersome in wording. But I do think that it matters. Once authors start facilitating the journalistic hype cycle, we become part of it. I don’t think there is a need to avoid referring to choosing doctors, etc. even though they were not part of the experiment – as I said before, I think this is a reasonable leap of interpretation. But there is no need to pretend that the study “answers” such a question. That is a step down a slippery slope (must be a cat picture for that somewhere).

    • Andrew says:

      A quick google turns up this.

    • jason_farnon says:

      “I don’t think there is a need to avoid referring to choosing doctors, etc.”

      Well, take a look at the “top reader picked comments” on the Times article. Of the first 30 or so that I looked at, every single one focused on the doctor hook. They are variants of “I would never go to a republican doctor because of ___”. Zero interest among these comments in the merits of study design or the biological mechanisms implicated. So surely you can appreciate the business incentives of adding in this kind of speculation, compared with disincentives like being criticized on this blog. (No offense to this blog.)

      • Actually, most NYT opinion column readers may not have sufficient background to assess the merits of study design or the biological mechanisms.

        I’ve followed Cass Sunstein’s scholarship [15 years] due to my long-term interest in groupthink and policy formation. And if one reads Cass Sunstein’s books and papers, one has a greater appreciation of where Cass is coming from. The good result of his work stems, imo, from giving eclectics like us some voice in the echo chambers that had formed. For that I, at least, am grateful. Cass and I may not agree on policy sometimes; but we do appreciate the impediments to rigorous decision making and the efforts that can marginalize or demonize individuals who have very good empirical [or unique] bases for their viewpoints.

  7. Andrew does raise an important point about the perception that the article was about doctors. Yet it highlights that very educated people may yield to partisanship & polarization. One has to establish base rates in order to make an accurate evaluation. First, I don’t know what % of doctors are Republican or Democrat. Second, as patients, do we ask our doctors if they are Republican or Democrat? I don’t. I’m keen on the doctor’s approach to medicine and the degree to which the doctor communicates the diagnosis/prognosis cogently and accurately. That is about competence.

    Insofar as the reference to the ‘halo effect’ goes, I think that it applies in the sense that our entire politics, medical industry, and knowledge-acquisition system have turned into a commercial marketing gambit. So naturally, enterprises select leaders [Corporate Top Management, Presidents, Salespersons, marketing professionals, etc] who are attractive and communicate well on television. They may or may not be all that original, analytical, etc. They have their scripts to convey to audiences. So I mean to suggest that we as audiences and constituencies buy into that marketing model without giving it as much scrutiny as we should.

  8. Yes, people may have followed it on that basis. But the finding also depends on which connotation of the ‘halo effect’ you adopt.
    Do you think that the participants were exercising their preferences on an ‘unconscious’ level? How do we discern that?

    In the real world, in my observation, the halo effect in the political sphere is not the result of an unconscious bias; rather, it seems a deliberate strategy to maintain group solidarity. Of course context matters in each case. And there may be exceptions. More broadly, these results issue from partisanship & polarization.

  9. Oops, I was responding to Tracy, who looks to have been flagged as spam & deleted. LOL Sorry about that.
