
Replication police methodological terrorism stasi nudge shoot the messenger wtf

Cute quote:

(The link comes from Stuart Ritchie.) Sunstein later clarified:

I’ll take Sunstein’s word that he no longer thinks it’s funny to attack people who work for open science and say that they’re just like people who spread disinformation. I have no idea what Sunstein thinks the “grain of truth” is, but I guess that’s his problem.

Last word on this particular analogy comes from Nick Brown:

The bigger question

The bigger question is: What the hell is going on here? I assume that Sunstein doesn’t think that “good people doing good and important work” would be Stasi in another life. Also, I don’t know who “the replication police” are. After all, it’s Cass Sunstein and Brian Wansink, not Nick Brown, Anna Dreber, Uri Simonsohn, etc., who’ve been appointed to policymaking positions within the U.S. government.

What this looks like to me is a sort of alliance of celebrities. The so-called “replication police” aren’t police at all—unlike the Stasi, they have no legal authority or military power. Perhaps even more relevant, the replication movement is all about openness, whereas the defenders of shaky science are often shifty about their data, their analyses, and their review processes. If you want a better political analogy, how about this:

The open-science movement is like the free press. It’s not perfect, but when it works it can be one of the few checks against powerful people and institutions.

I couldn’t fit in Stasi or terrorists here, but that’s part of the point: Brown, Dreber, Simonsohn, etc., are not violent terrorists, and they’re not spreading disinformation. Rather, they’re telling and disseminating truths that are unpleasant to some well-connected people.

Following the above-linked thread led me to this excerpt that Darren Dahly noticed from Sunstein’s book Nudge:

Jeez. Citing Wansink . . . ok, sure, back in the day, nobody knew that those publications were so flawed. But to describe Wansink’s experiments as “masterpieces” . . . what’s with that? I guess I understand, kind of. It’s the fellowship of the celebrities. Academic bestselling authors gotta stick together, right?

Several problems with science reporting, all in one place

I’d like to focus on one particular passage from Sunstein’s reporting on Wansink:

Wansink asked the recipients of the big bucket whether they might have eaten more because of the size of their bucket. Most denied the possibility, saying, “Things like that don’t trick me.” But they were wrong.

This quote illustrates several problems with science reporting:

1. Personalization; scientist-as-hero. It’s all Wansink, Wansink, Wansink. As if he did the whole study himself. As we now know, Wansink was the publicity man, not the detail man. I don’t know if these studies had anyone attending to detail, at least when it came to data collection and analysis. But, again, the larger point is that the scientist-as-hero narrative has problems.

2. Neglect of variation. Even if the study were reported and analyzed correctly, it could still be that the subset of people who said they were not influenced by the size of the bucket were not influenced. You can’t know, based on the data collected in this between-person study. We’ve discussed this general point before: it’s a statistical error to assume that an average pattern applies to everyone, or even to most people.

3. The claim that people are easily fooled. Gerd Gigerenzer has written about this a lot: There’s a lot of work being done by psychologists, economists, etc., sending the message that people are stupid and easily led astray by irrelevant stimuli. The implication is that democratic theory is wrong, that votes are determined by shark attacks, college football games, and menstrual cycles, so maybe we, the voters, can’t be reasoned with directly, we just have to be . . . nudged.

It’s frustrating to me how a commentator such as Sunstein is so ready to believe that participants in that popcorn experiment were “wrong” and yet so quick to attack advocates for open science. If the open science movement had been around fifteen years ago, maybe Sunstein and lots of others wouldn’t have been conned. Not being conned is a good thing, no?

P.S. I checked Sunstein’s Twitter feed to see if there was more on this Stasi thing. I couldn’t find anything, but I did notice this link to a news article he wrote, evaluating the president’s performance based on the stock market (“In terms of the Dow, 2018 was also pretty awful, with a 5.6 percent decline — the worst since 2008.”) Is that for real??

P.P.S. Look. We all make mistakes. I’m sure Sunstein is well-intentioned, just as I’m sure that the people who call us “terrorists” etc. are well-intentioned, etc. It’s just . . . openness is a good thing! To look at people who work for openness and analogize them to spies whose entire existence is based on secrecy and lies . . . that’s really some screwed-up thinking. When you’re turned around that far, it’s time to reassess, not just issue semi-apologies indicating that you think there’s a “grain of truth” to your attack. We’re all on the same side here, right?

P.P.P.S. Let me further clarify.

Bringing up Sunstein’s 2008 endorsement of Wansink is not a “gotcha.”

Back then, I probably believed all those sorts of claims too. As I’ve written in great detail, the past decade has seen a general rise in sophistication regarding published social science research, and there’s lots of stuff I believed back then that I wouldn’t trust anymore. Sunstein fell for the hot hand fallacy fallacy too, but then again so did I!

Here’s the point. From one standpoint, Brian Wansink and Cass Sunstein are similar: They’re both well-funded, NPR-beloved Ivy League professors who’ve written best-selling books. They go on TV. They influence government policy. They’re public intellectuals!

But from another perspective, Wansink and Sunstein are completely different. Sunstein cares about evidence; Wansink shows no evidence of caring about evidence. When Sunstein learns he made a mistake, he corrects it. When Wansink learns he made a mistake, he muddies the waters.

I think the differences between Sunstein and Wansink are more important than the similarities. I wish Sunstein would see this too. I wish he’d see that the scientists and journalists who want to open things up, to share data, to reveal their own mistakes as well as those of others, are on his side. And the sloppy researchers, those who resist open data, open methods, and open discussion, are not.

To put it another way: I’m disturbed that an influential figure such as Sunstein thinks that the junk science produced by Brian Wansink and other purveyors of unreplicable research is a set of “masterpieces,” while he thinks it’s “funny” with “a grain of truth” to label careful, thoughtful analysts such as Brown, Dreber, and Simonsohn as “Stasi.” Dude’s picking the wrong side on this one.


  1. D Kane says:

    > The claim that people are easily fooled.

    I agree that this is key. But so is the next step: Because people are easily fooled (and often too stupid to know what is best for them), someone else needs to nudge/force them to act differently.

The problem for Sunstein, and others like him, is that they have a strong prior that nudging is good and that the US government ought to do more of it. They don’t like “evidence” which challenges that prior. It would be one thing if open science and the replication movement showed that social science research made mistakes on both sides of the debate, showed that claims about people being intelligent were just as likely to be garbage as Wansinkian claims about people being stupid. But that (sadly?) has not been the case . . .

Research supportive of the nudge school of policymaking has been much more undermined by open science/replication than research antagonistic toward it. If the open science/replication movement had never happened, the nudge movement would be stronger today than it in fact is.

    > I wish he’d see that the scientists and journalists who want to open things up, to share data, to reveal their own mistakes as well as those of others, are on his side.

    You aren’t on Sunstein’s side. And he knows it. And that’s why he thinks — “grain of truth” — that you (we!) are like the Stasi. They (like you) are on the other side.

    • Andrew says:


      1. I agree with you that it seems that Sunstein thinks of himself and his friends as the nudgers, not the nudgees. That’s a big problem. I’d just amend your statement slightly to say that it’s not just the U.S. government who they imagine doing the nudging. It’s also other governments, and also companies. I imagine lots of the applications of nudging would be for advertising and marketing.

2. I agree that lots of the work of Brown, Dreber, Simonsohn, myself, and others has undermined the large claims made for “nudging.” But that doesn’t mean we’re not on Sunstein’s side. Sunstein is a social scientist as well as a policy entrepreneur. The discrediting of Wansink etc. indeed has, and should have, the effect of diminishing the appeal of “nudge,” and that’s bad for Sunstein and his friends in the short term. Long-term, though, I assume that Sunstein is promoting social priming etc. as a way of increasing the public good, and if these nudges don’t really work as advertised, I guess he’d want to know that. In that sense, Brown, Dreber, Simonsohn, etc., are doing Sunstein a favor by letting him know sooner, rather than later, that he’s going down a blind alley.

      • D Kane says:

        > doesn’t mean we’re not on Sunstein’s side

        But Sunstein disagrees. And, ultimately, doesn’t he get to decide who is on his side?

Consider this metaphor. Imagine that I am a scientist actively involved in the fight against malaria. I think that malaria kills lots of people, that this is bad, that we are making progress against it, and that we should spend more money in the effort to defeat it. There are thousands of scientists involved in this work, but I am a TED-talk-giving, NPR-interviewed, Ivy League-tenured leader in the anti-malaria fight.

You, a nasty nobody open science/replication zealot, come along and show that one (or even a handful) of these thousands of scientists is doing something wrong, data-fudging or whatever. You really expect me to thank you? You really think we are on the same side? No! You are naive! Your efforts — while correct in their conclusions — set back the fight against malaria. You make it too easy for the malaria-deniers to malign the efforts of the 1,000+ honest scientists. Your efforts decrease future funding for anti-malaria research. You are, more or less, responsible for thousands of deaths.

        And you think you are on my side? Ha!

        If you were really on my side, you would have reached out to me privately, talked with members of the club in confidence, guided things behind the scenes, so that we could fix things without putting the larger cause of anti-malarial research and funding at risk. But, no, you couldn’t do that. You had to make a big stink.

Obviously: I completely disagree with Sunstein on the substance, but it is easy for me to understand why he does not think that you (we!) are on his side.

        • Andrew says:


          Could be. But lots of people contacted Wansink privately (not because they were trying to spare his reputation, but just because they had direct questions about his data and research methods and they naively thought he’d be interested in helping them clear things up), and it didn’t work at all. Nothing really worked until the news media got engaged, and even Cornell University decided to stop covering for the guy.

          Based on my experiences, I don’t think the “talk with members of the club in confidence” strategy is such a good idea. I mean, sure, when people talk with me in confidence (or publicly), I’m motivated to figure out what went wrong, but lots of researchers don’t seem to think this way.

Similar issues arise when we’ve criticized extreme claims that have been made regarding the benefits of early childhood interventions. One might argue that the potential benefits are important enough that it’s a good idea for researchers to lie, or at least to exaggerate the benefits, in order to influence policy. This is not a position I’d like to take, but, from a purely consequentialist point of view, who knows, it could be better.

          Anyway, I think you’ve identified an inconsistency in my thinking. On one hand, I have a view that researchers can’t be trusted to do the right thing on their own, hence the value of post-publication review, open data/methods/criticism, etc. On the other hand, I feel at some gut level that researchers and pundits ultimately want to get things right and thus should be appreciative of outsiders who point out their errors.

The thing that frustrates me about people like Cass Sunstein, David Brooks, Susan Fiske, etc., is that they are so secure in their careers that they could afford to admit their mistakes. Sure, each of them has personally benefited, and continues to benefit, from the credibility that’s been given to various forms of junk science and sloppy analysis. So, yeah, if they were to admit the problems with such work, they’d have to take a reputational hit, and this could reduce book sales, lecture fees, ability to promote their proteges’ careers, etc. Real losses. But none of these people is in danger of losing their job, their livelihood, or even much of their social influence. To me, these people are like various super-rich people who go ballistic over the prospect of paying 20% more in taxes. They’d still be rich, for chrissake!

But I guess the analogy to taxation helps to explain the problem, as there are lots of reasons that many rich people don’t want to pay more taxes: (1) they feel they earned the money and they don’t think it’s fair for them to have to share it, (2) they’re worried about the slippery slope leading to expropriation, (3) they don’t want to reduce their standard of living even one bit, and (4) they don’t trust the government to spend the money wisely. The analogy for researchers and pundits would be: (1) they feel they earned their status and they don’t feel it’s fair for them to have to share it with “second-stringers” etc., (2) they’re worried about the slippery slope leading them to get no respect at all, (3) they don’t want to take a hit on book sales, NPR appearances, etc., and (4) they don’t think that people like Brown, Dreber, Simonsohn, etc., know anything useful.

          • Of course it’s not a good idea, but it’s the only idea that would indicate that you’re “on his side” because basically you’re falling over yourself to make sure to uphold his reputation and keep him from being disgraced by his own crappy research.

          • D Kane says:

            > The thing that frustrates me about people like Cass Sunstein, David Brooks, Susan Fiske, etc., is that they are so secure in their careers that they could afford to admit their mistakes.

            > To me, these people are like various super-rich people who go ballistic over the prospect of paying 20% more in taxes. They’d still be rich, for chrissake!

            There is an old aphorism in finance which may be applicable.

            Statement: “Those billionaires are so greedy! I would stop working if I had $100 million.”
            Reply: “That is why you don’t have $100 million.”

Might the case with Sunstein et al. be similar? None of these people got where they are by admitting “problems” with the work they relied on or by wanting “to get things right” by looking closely at work which supported their views. Indeed, the more careful you were (and are!?) as a junior columnist or academic, the less likely you were to get to where they have gotten today.

As I often comment to my students: Without p < 0.05, this paper would not have been published. Without this and other publications, this junior faculty member would not have gotten tenure. Given those constraints/incentives, what prior should we have as to the quality/robustness of the work?

        • D:
Sadly, you have captured my experiences with the mediocre clinical researchers (the most common kind) I worked with when I was in academia.

As a former head of public health once commented: you can spot the really smart people because, when someone raises a serious criticism as they present their research in public, they smile. They realize they have just learned something they were wrong about and can focus on addressing it.

Maybe the “If you were really on my side, you would have reached out to me privately, talked with members of the club in confidence” is part of an impostor syndrome prevalent among well-funded, NPR-beloved Ivy League professors?

    • Gerd Gigerenzer’s Risk Savvy is an antidote to Nudge by Richard Thaler and Cass Sunstein. Gigerenzer would prefer that consumers/patients be taught statistical literacy directly to empower them to make more accurate decisions. I wholeheartedly concur with Gigerenzer.

I’ve read most of Cass Sunstein’s books, some of which are quite excellent, in particular Constitution of Many Minds and Rumors. Cass Sunstein is one of the few who can read my skepticism without my being explicit. I may be wrong on that. But I’ve been vocal on some issues in foreign policy circles.

      There has been considerable criticism of Cass Sunstein in science & government circles. But I haven’t delved into his stances deep enough to evaluate the challenges put to him.

He has another book out: Conformity. I haven’t read it yet.

    • jim says:

      “they have a strong prior”

      No doubt! LOL when I read about the refilling soup bowls!

      For me, for sure, if I got a 6oz soup bowl that refilled itself without me knowing I’d just go right on eating gallon after gallon of soup, never noticing. You bet!

      No doubt I’d also hardly notice as the chunks of chicken and noodle splurted into the small bowl from the small tube at the bottom…

      There’s no way he even did that! That has to be a complete fabrication.

      • Mary Kuhner says:

        If you try to imagine designing the refilling soup bowl, it gets harder and harder the more you think about it. The soup has to be entering the bowl at exactly the right rate. If the diner stops eating to chat, it won’t do to have his bowl overflow onto the table. If he eats abnormally fast, it won’t do for him to get to the bottom and see the little tube. You *could* probably make such a thing, with sensors, but there’s no indication that they did; and you’d be troubleshooting it for *ever*. If the soup were too thin, you’d see the tubes; too thick, it would clog. Too much pressure, too little pressure, negative pressure–the soup goes away!–and it can’t make noise, can’t vibrate, and what do you do about people who try to pick up the bowl or move it?

        If you really did this, I think you would describe it in your paper, because it would be a TON of work and you’d want to share how painful it was.

        Wansink illustrates this experiment in his talks with a clearly impossible cartoon of a funnel below the level of the table, into which an experimenter is pouring soup that magically ends up in the bowl on the table. Of course the real apparatus can’t look like that, but the only photos of it just show the bowl, with a small grommet in the middle for the soup to enter. The grommet is kind of conspicuous, but maybe with an opaque soup, I dunno. I don’t think they really did this experiment. They got as far as making the bowls and stuff, but then it was too hard to get it to work, and they gave up. This would explain why an experimental design with 2 bottomless and 2 non-bottomless subjects per table ended up with 23 controls and 31 manipulations (as pointed out by James Heathers) and no mention of exclusions….

      • Andrew says:

        Jim, Mary:

I searched the internet and found a photo of the refilling soup bowl! Go to 2:36 in this video. There are two coauthors on that paper, James E. Painter and Jill North, so maybe one of them could be asked whether the experiment ever happened.

        See also this video with actors (Cornell students, perhaps?) which purports to demonstrate how the bowl could be set up in a restaurant. The video is obviously fake so it doesn’t give me any sense of how they could’ve done it in real life.

        I also found this video where Wansink demonstrates the refilling bowl. But this bowl is attached to the table so I don’t see how it could ever be delivered to someone sitting at a restaurant.

        What’s funny to me is that, in all my years hearing about this story, I never reflected on the possibility that the entire experiment was made up.

        • Jordan Anaya says:

          They were instructed not to move the bowl (which theoretically could prevent them from realizing it was attached to the table):
          “They were instructed to eat the soup and not to move the placement of the bowl from its predesignated place on the table.”

          I don’t know, if someone told me not to move something I might be tempted to move it, or at least I might be very interested in what’s special about the bowl.

  2. Z says:

    I don’t think when Sunstein said “Stasi” he meant to evoke secrecy, lies, and disinformation at all. These may be salient properties of the Stasi, but I’m certain that for the purposes of Sunstein’s joke the Stasi are meant to represent solely a zeal for finding offenders and punishing them harshly, with this zeal existing independent of the nature of the offenses for which people are punished. He’s saying, “The replication movement is fighting for a good cause, but many of its members seem drawn to the movement by the prospect of bringing down scientists/work that fails to replicate as much as by the prospect of improving science.”

    This could well be true (the “kernel of truth” that Sunstein mentions in his apology tweet). I’m not in a position to judge. If it is true, then it seems like there’s no real problem with the tweet as an independent observation. Rather, the problem is that the tweet exists in an atmosphere where many seek to kneecap the open science movement by painting it as a collection of “terrorists”. Sunstein’s tweet promotes this weaponized characterization and is therefore harmful even if it does contain a “kernel of truth”.

    • Andrew says:


      Interesting point. But remember the Javert paradox: The people who went to the trouble of figuring out all the problems with Wansink’s work were criticized for putting so much effort into it. Stasi-like, I guess? But without that effort, I suspect we’d still be thinking Wansink did good work. So the critics, at some personal cost, uncovered fatal flaws in work that Sunstein had ridiculously (in retrospect) labeled as “masterpieces.” Their thanks for this important work? To be labeled as “Stasi.”

      • Z says:

        Yeah, if Sunstein were appropriately grateful for the work of the open science movement, he wouldn’t fixate so much on what may be some confrontational personalities in its ranks. And not only that, I’m sure he’s aware of the dynamic that exists where jokes like his (especially by powerful people like him) serve to discredit open science work. So the tweet was at best irresponsible and at worst an intentional attempt to discredit the open science agenda dressed up as a joke. In short, I don’t think he would have tweeted that if he genuinely appreciated open science.

    • jim says:

      “I don’t think when Sunstein said “Stasi” he meant to evoke secrecy, lies, and disinformation at all.”

I agree w/ this. Most colloquial references to the Stasi or the Gestapo or whatever are intended in the more banal sense, equivalent to “nasty people” rather than “cold-blooded killers.”

      • Andrew says:


To me, when someone says “Stasi,” I think not of cold-blooded killers but of bureaucrats who pay people to inform on their neighbors.

        Here’s where I differ from Sunstein. He seems to think of correcting errors in published research as comparable to informing on neighbors in a communist state. I think of correcting errors in published research (often research that has been hyped by the news media and has been funded by taxpayer dollars) as an essential part of science.

        One key difference is that the Stasi acted in secret, while science reformers act openly.

        Oddly enough, it seems that Sunstein used to be a fan of whistleblowers; see this book review by Brian Martin from 2004:

        “If only those complainers would just get in line, then we could get on with the task and be more effective.” Have you ever heard this sort of comment? The underlying assumption is that agreement, cooperation, consensus, conformity – whatever term you want to use – is beneficial for the group. Consequently, those who challenge orthodoxy are deemed to be selfish.

        Actually, the reality is exactly opposite, according to a readable book by Cass R. Sunstein titled Why Societies Need Dissent (Harvard University Press, 2003). Sunstein says that “Much of the time, dissenters benefit others, while conformists benefit themselves.” (p. 6) Whistleblowers certainly know that they seldom benefit from their disclosures; more commonly they are ruthlessly punished.

        In making the argument that dissent benefits society, Sunstein describes fascinating research on group dynamics.

        . . .

        Why Societies Need Dissent concludes with this statement: “Well-functioning societies take steps to discourage conformity and to promote dissent. They do this partly to protect the rights of dissenters, but mostly to protect interests of their own” (p. 213).

        I wonder what changed for Sunstein between 2003 and 2019. In 2003 he was taking the position of the general public; in 2019 he’s taking the position of his friends who have something to lose from having their work scrutinized.

  3. Anonymous says:

    Quote from the blogpost: “The claim that people are easily fooled. Gerd Gigerenzer has written about this a lot: There’s a lot of work being done by psychologists, economists, etc., sending the message that people are stupid and easily led astray by irrelevant stimuli.”

I think (parts of) this “open science” thing, and (some) people in it, might be just as much about “nudging” as (the research of) people like Sunstein and Wansink.

    For instance see the “open practices badges” which i think fit nicely in this whole “nudging” literature (perhaps including a possibly flawed effectiveness study: see

I also think proposed large-scale “collaborative” projects that are currently being associated with “open science” (for some reason) are “nudging” in a way as well. If I am not mistaken, a small group of people decide(d) which “Registered Replication Reports” were performed, for instance. If you then ask people to join your replication projects because “we have to collaborate” (for some reason), you are effectively “nudging” folks, in my view.

I also reason that you could (if you wanted to) pick and choose only certain types of research to be executed this way, and use this to paint a certain picture. Is that a form of “nudging”? You could then use this “crisis” to do lots of other things that are perhaps all about “nudging,” or perhaps even “control.”

    I think it’s important to always try and talk about science, scientific things, and what, why, and how something is (“good” for) science.

    • Anonymous:

      In spite of good intentions these things do plant future “group thinks” that some will take career advantage of.

      • Anonymous says:

        Quote from above: “In spite of good intentions these things do plant future “group thinks” that some will take career advantage of.”

Yes, this is what I am trying to warn about. I think this is already happening.

Coming up with the next “this is going to improve things” project is already becoming the “sexy paper + subsequent book deal” of a decade ago, in my view. Lots of money is (in my view often unnecessarily) involved with these kinds of (often viewed as “collaborative”) projects, of which universities may get a direct cut (for “reasons”). This is surely a way for un-tenured “researchers” to get in on the action, I would think!

I also think it’s important to be aware of the possibility of “group think,” which I think is already happening as well. It’s not necessarily “good” science if a majority thinks it’s “okay” (e.g., see appeal to the majority, or “argumentum ad populum”). It’s also not “good” science to sort of “over-power” other researchers who think differently, or want to raise points, by “ganging up” on them because you have a group of people who think like you.

It’s also not necessarily “good” science if something, or someone, is associated with “open science.” Sometimes something that is associated with “open science” is in fact not even very “open” and “transparent” at all; e.g., see Hardwicke & Ioannidis, 2018, “Mapping the universe of Registered Reports.”

        I think these are all possibly crucial things to be mindful of in science, and scientific discourse.

Yes, that may be the case, Keith. I am sometimes astonished at how strident the dynamics are. I find them somewhat corny. I don’t recall my father’s generation of academics being so snippy, although there were politics among his colleagues that were wearying. Some people are able to deal with all kinds of people, which is a real talent.

  4. Wonks Anonymous says:

“Sunstein thinks that the junk science produced by Brian Wansink and other purveyors of unreplicable research is a set of ‘masterpieces’”
He thought that, in the past tense, when Nudge was published and prior to Wansink’s downfall.

    • Anoneuoid says:

      Has this guy ever run a replication in his career?

I ask because the only reason these “police” exist is that the people in these fields have not been doing their job of checking each other’s work.

    • Andrew says:


I dunno. Sunstein seems more angry at the people who, by exposing junk science, are allowing him to make more sensible policy recommendations than at people such as Wansink, who produced the junk science that can lead to poor recommendations. Seems like screwed-up priorities to me.

      • If your goal is power, being right or wrong is irrelevant, what you want is to be *seen as right*.

        • Andrew says:


          Sure, but I can only assume that Sunstein’s ultimate goal is to make the world a better place, not that he’s a Stalin-like person who wants power for its own sake. Also, Stalin arguably needed to continue to fight for power because if he lost absolute power, he was in danger of being ousted himself. As noted above, I don’t see that admission of error would put Sunstein in jeopardy of losing most of his fame, fortune, influence, etc.

          • Anoneuoid says:

I don’t know anything about this guy, but if he has books full of stuff like that excerpt about Wansink, then obviously he has built a career off uncritically accepting whatever he reads in the journals without requiring that basic scientific tools like independent replication be used.

            So the credibility is already gone.

          • > Sure, but I can only assume that Sunstein’s ultimate goal is to make the world a better place

            Why would you assume this? I mean, you could assume this about people who it’s clear have given up chances to benefit personally in order to make things better for others (Richard Stallman for example), they are sort of “putting their money where their mouth is” so to speak, but we’re talking about someone where every ounce of his career history screams “I want to be seen as a top expert and have people dote on my every word”


          • Put another way, Andrew: what is the purpose of doing bad science? Why design some crappy study of acupuncture vs. allergy medicine, or do some junk study of the size of stale popcorn buckets, or massive uncalibratable computational studies of the circulation of trash in the ocean, or whatever? These things have no hope in hell of telling us anything reliable about the actual world…

            The purpose of unverifiable, poorly designed, badly thought out science is to *leverage the scientific method as a political tool* to give the people with the “peer reviewed journal articles” power over those who don’t have peer reviewed journal articles, or to garner money and support for an in-group vs an out-group.

          • I believe Cass Sunstein has commented on the anti-GMO and perhaps the anti-vaccine movements. That gets Nassim Taleb in a knot.

  5. John Hall says:

    On your PS on the Dow Jones: when you say “Is that for real??” I wasn’t sure whether you doubted that the fact about the Dow Jones was true (it’s only true for the total-return series, not for the price index, which did decline in 2015) or were directing the comment at Sunstein, as in “are you serious right now, that’s ridiculous.”

    Regardless, associating stock market returns with presidents is fraught with peril. It’s one thing to do an event study, like “politician announces new plan, market declines”; it’s another to take averages like that. For 2018, the decline came at the very end of the year, with the Fed hiking interest rates and the market getting spooked. I mean, you could fault Trump for appointing the Fed chair, but if anything Trump was telling him not to raise rates and he did anyway. So I feel you can blame the market’s decline more on the Fed than on Trump, especially since it turned on a dime in January 2019 when Powell reversed himself.

    • Andrew says:


      What I thought ridiculous was the idea of evaluating presidents’ performance based on the stock market. I mean, sure, sometimes there’s a stock market crash and you can blame the government for feeding the bubble or for not reacting well after the crash—but to take a 5.6% decline in the market in one year and take this as a negative evaluation of the president, that just seems silly to me.

  6. Peter Dorman says:

    I agree with Z that there’s a psychological claim implicit in Sunstein’s tweet. Another is this: the political/intellectual position for which Sunstein is a leading spokesperson sees the world divided between the uninformed and easily misled many and the informed and dependable few. I suppose you could trace this to Plato (not sure about the Chinese), but in American history the big name is Walter Lippmann. The modern version is built around the psychological and public policy literatures on heuristics, information cascades, and similar mechanisms. Before he was a Nudger, Sunstein advocated cost-benefit analysis as a weapon against both government failure (bureaucratic clumsiness, rent-seeking) and popular manias (cue Alar). At some higher level it’s all the same: a handful of technocrats, who see things clearly and objectively, need to be empowered to overcome popular error. The nudge part reflects a desire to uphold individual choice, also a deep commitment. (Minor point: in collective choice Sunstein seems to have drifted from constitutional constraints on democratic error, his 1990s position on CBA, to nudges in the form of requirements that public agencies explain why their decisions depart from CBA bottom lines.)

    What I think he objects to in the “replication police” is that they have particularly gone after researchers whose work is central to Sunstein’s paradigm. This gives him an ideological impetus for choosing sides.

  7. David says:

    I’m not sure if this counts as the “replication police” or what, but the New Republic reviewed Sunstein’s most recent book and found several passages where he, um, replicated previous books. More generally, he seems to be replicating the same ideas:


    Hence he tells us that people typically think that more words, on any given page, will end with -ing than have n as the second-to-last letter—an anecdote you would have already encountered had you made it as far as page 30 of The Cost-Benefit Revolution. He explains the Asian disease problem and provides a number of choice-framing analogies also found in The Cost-Benefit Revolution. He retells the David Foster Wallace water parable spotted on page eleven of On Freedom, published in February of this year. (Explaining the importance of the parable in that earlier book, he notes: “This is a tale about choice architecture—the environment in which choices are made. Choice architecture is inevitable, whether or not we see it, and it affects our choices. It is the equivalent of water.” Fast forward a few months to the publication of How Change Happens, and this gloss has become: “This is a tale about choice architecture. Such architecture is inevitable, whether or not we see it. It is the equivalent of water.”)


    This is, after all, the man who counts Republic.com, Republic.com 2.0, and #Republic on his list of published works; who wrote one paean to cost-benefit analysis called The Cost-Benefit State and another, 16 years later, called The Cost-Benefit Revolution; who followed up his 2008 blockbuster, Nudge: Improving Decisions About Health, Wealth and Happiness, which set out the case for using welfare-oriented behavioral prompts or “nudges” in the design of regulation, with 2014’s Why Nudge?, which valiantly addressed the question already answered six years earlier.

  8. D Kane says:

    > I’m sure Sunstein is well-intentioned

    At some point, his intentions become irrelevant. We can’t look into his heart. Only actions really matter.

    Andrew seems to have a (naive?) belief that, once we show Sunstein and others how useful open science/replication are, he will come around. I predict the opposite, at least in the short term. One funeral at a time, as usual.

    Two relevant cases are Iraqi mortality estimates and the “hockey stick” estimates of global temperatures. The Sunsteinian defenders of the status quo acted just as offended as Sunstein/Fiske have. They have never (?) acknowledged that open-science/replication helped us to understand reality better than if we lived in a counter-factual world without those movements. If thing X hurts the larger causes of ending the Iraq War and/or fighting climate change, then they hate thing X. Period. Intentions don’t matter.
