LaCour and Green 1, This American Life 0

A couple days after listening to the segment where This American Life got conned by Mars One, I happened to listen to the This American Life segment on LaCour and Green. LaCour didn’t appear on the show but Green did. Wow, Ira Glass really got scammed. But it was a pretty elaborate con; LaCour not only had data, he had entire fake conversations between canvassers and voters. Or maybe those conversations were real? It wasn’t clear if they were supposed to be part of the research study or something done afterward, just for the show.

Glass sets up the canvassing experiment as something that shouldn’t actually work:

There's this thing called the backfire effect. It's been documented in all kinds of studies. It shows that when we're confronted with evidence disproving what we believe, generally we just dig in and we believe it more. And the rare times that people do change, it's slow.

Ira then gives an example of one of the interviews and says:

And even more amazing than the fact that this actually worked is that it lasted. . . . And a study by researchers at UCLA and Columbia University found that a year later, not only did these voters stay convinced, they also convinced others in their own households to switch. Apparently neither of those things ever happens.

And the episode itself was called “The Incredible Rarity of Changing Your Mind.”

What did Don Green think about all this?

Professor Green says he and his colleagues have read 900 papers. And they haven’t seen anything like this result—anyone who’s changed people’s views and it lasted like this.

Ok, this is just unfortunate. If you don’t entertain the possibility that the data were faked—a possibility I did not myself consider when blogging on the paper a year ago, when I described myself as “stunned” at such “huge” effects—then, yeah, you gotta say this is newsworthy.

Now here’s something from the This American Life retraction statement:

As for the canvassers at the Leadership LAB at the Los Angeles LGBT Center, they say they were blindsided by the news of Green’s retraction. “This is a complete and utter shock to us and we’re still trying to figure out which way is up,” said Steve Deline, one of the organizers of the canvassing. “We had no idea Mike was fabricating data.”

Deline told me that the way it worked is that LaCour gave them lists of people he claimed to have signed up for the online survey. Then canvassers did their jobs and went to those houses. This took hundreds of hours. If LaCour was lying, what a waste.

So now I’m confused. The interviews all actually happened? Not all 9,507 of them, right? And if there was really no money to do the survey—apparently LaCour also lied about the funding—then who paid the canvassers? Or did they do it all for free?

Glass concludes the retraction as follows:

“Maybe the thing to convey in your blogpost,” Green told me, “would be something to the effect that, just because the data don't exist to demonstrate the effectiveness of this method of changing minds, doesn't mean the hypothesis is false. And now the real work begins.”

He thinks the canvassers should go out and do the study again, for real this time.

Wha???

Wait. One. Second.

You have a study that, if it were actually done, would be very expensive (a point Glass made in his original show), to explore a hypothesis that contradicts what’s been “documented in all kinds of studies”; there have been “900 papers” on related topics and none of them report anything like these effects; the only time this effect has been found, it turns out to have been an embarrassing fraud—and Green thinks they should go out and do the study again?

Ulp. There are lots and lots of studies people are interested in doing, and I’m sure this activist group in Los Angeles has a long to-do list. Do you really think they should spend their precious time, money, and human resources to study an idea that is contradicted by an entire 900-paper literature and whose only claim to plausibility was a made-up experiment??

“And now the real work begins,” indeed.

You gotta be kidding.

P.S. In fairness to Don Green, he gave this quote to Glass on a day when he must have been stunned by LaCour's duplicity. I imagine that now, after several months have passed, he'd no longer think it was a wise use of resources to try to replicate this fake study.

39 thoughts on “LaCour and Green 1, This American Life 0”

    • Garnett:

      Yes. What interests me is the repeated insistence on how unexpected and unprecedented this result was. This seems to happen a lot in psychological science (as well as in Psychological Science), that people argue simultaneously that a result is completely surprising and that it makes complete sense. Surprising yet ultimately plausible results do happen, of course, but by their nature they must be rare.

      • I think that you've said this before, but a finding that is both surprising and plausible gives a veneer of triumphant science to the work. It's amazing how many science stories on the news or wherever have the investigators stating "We were surprised to find…." Ironically, such a statement appears to make the finding more credible!

        • Agreed. The real concern is that because there are hundreds of papers refuting this effect, we should be suspicious of atypical findings. Rather than always working to find the next counter-intuitive result, which might simply be present due to noise or luck, we should be encouraging sound science grounded in substantive problems. But don't get me started on some of the many issues with the publication process.

          When I see a study that does not comport with hundreds of documented effects, I don't instantly conclude that this is exciting. I think the more appropriate response should be "huh, that's odd. I wonder why these data reveal a different effect. How is the design of this study different from the others? Would it replicate?" And so on. It makes me think of the Nosek, Spies, & Motyl (2012) paper where they discuss an aberrant finding: rather than immediately writing it up for publication, they investigated it further through replication, only to find that it did not hold.

      • Why is Andrew Gelman still writing about this? As his post shows he clearly does not understand the mechanics of the incident and he had no involvement in the study. This commentary is uninformative…it’s time to move on.

        • Next:

          1. As a regular listener of This American Life, I found it interesting to see how they handled this story.

          2. As a statistician and social scientist, I’m disturbed by the juxtaposition of “he and his colleagues have read 900 papers. And they haven’t seen anything like this result” with “He thinks the canvassers should go out and do the study again, for real this time.”

          I don’t see why I shouldn’t be writing on this, just because I don’t know the mechanics of the incident and had no involvement in the study.

          Or is this a new rule: only people who have involvement in a study can comment on it? Or is comment by outsiders allowed, but only positive comments? I never saw this rule anywhere.

  1. People have been too nice & polite & forgiving about Don Green & his role in this episode. Surely he deserves some blame too?

    His story sounds far too convenient. If only Green had wanted to find out, he could have found out a lot.

  2. >apparently LaCour also lied about the funding—then who paid the canvassers? Or did they do it all for free?

    Good question; this is the first I've heard of any survey organization admitting to taking part in this study. Maybe it was for follow-up efforts? Either another study, or general advocacy, or just longer-term follow-up? In the original critique the alleged survey firm said it had never heard of LaCour (http://stanford.edu/~dbroock/broockman_kalla_aronow_lg_irregularities.pdf).

    • That isn't a survey organization; it is an organization that was sending out volunteer canvassers. Those were not the interviews; those were the treatment. What's crazy, then, is: where did he get this contact information? I hadn't realized that the intervention had actually been confirmed to have happened. That's really exploitative. They were volunteers for a gay rights organization, but then went out and read a recycling (placebo) script based on LaCour's design?

      From what I read of the Qualtrics statement in the Broockman, Kalla, and Aronow report, they didn't deny knowing him; they denied that he deleted data. Tons of people have Qualtrics accounts, especially if their university has a site license (I don't know if theirs did or not).

      • Exactly. It was a sort of push polling. The gay organization thought the study was legit, so they went and spent their time doing "canvassing" in order to deliberately change minds. In other words, they were trying to move the needle, not measure it. Really, it was a form of PR. But they wasted their time.

  3. I learned that This American Life uses “Article Adjective Noun Preposition Proper Noun” titling. It’s destroyed The Atlantic, with examples like: “The Deepening Mystery of the San Bernardino Shooters’ Social Media,” “The Moral Failure of Computer Scientists,” and “The Unregulated Rise of the Medical Scribe.” Thankfully, the New York Times style guide forbids it. My high school journalism teacher, Judith Forseth, would have forbidden it too.

    When you diagram the sentence structure, the whole title turns out to be a single noun phrase, attempting to masquerade as something more complete. It's the equivalent of "Trump" or "Beetles" as headlines. Headlines without verbs are duckspeak doublethink.

  4. To clarify, there are two different things going on in this "study": (a) the canvassers going door to door and contacting people; (b) an online survey, conducted later, of the people the canvassers had contacted; respondents had supposedly been sent mailers beforehand asking them to enroll, so that they could take the survey at a later date, after the canvassers came.

    (a) definitely happened. It was funded by the LGBT advocacy group out of their own money, and they got quite a bit of grant money to pay for it. Lots of doors were knocked on. The conversations in this Glass segment are real; they happened.

    (b) didn't happen; it was imaginary and made up. He conned the LGBT group into thinking he was going to do a survey to evaluate their efforts, and he didn't.

    I think your confusion was that you were thinking the "canvassing" occurred during the survey interview. It didn't. "Treatment" was administered here — it really happened, and was paid for by the LGBT group from its own funding (it was trying to defeat Prop 8, after all, so it was knocking on doors). But nobody measured anything about whether it worked or not. That was the fake part, paid for by his imaginary grants from imaginary funding sources.

    I hope that clarifies it a bit.

    • +1

      Also, as Ira Glass said in May, “The UC Berkeley and Stanford researchers wanted to replicate what LaCour did. They started their research two weeks ago …”.

      http://www.thisamericanlife.org/blog/2015/05/canvassers-study-in-episode-555-has-been-retracted

      That's “the real work,” and it's mentioned in the New York magazine article (linked in dl's comment above): Broockman and Kalla “were recruited by Dave Fleischer, the head of the Leadership LAB, for a project based in Miami (Broockman and Kalla worked on it from California) that dealt with transgender equality.”

        • The surprising contribution of the supposed study was not that attitudes about gay rights changed; it was that attitudes changed when the intervention involved personal contact with someone from the affected group, with the special twist that membership in that group was not obvious but was revealed as part of the canvassing script. (You can see from that how complicated the idea is — it's not the same thing as saying that racist attitudes will change through interaction with a person who is visibly African American. The fact that it is such a complicated idea is another reason to expect that it would be very expensive to carry out.) So transgender equality would potentially provide another opportunity to measure this effect.

  5. How many large-N social science studies are lies on the scale of LaCour and Green's? All of them. We cannot (and should not) trust any large-N social science study that is based on privately collected data. Faking is too easy (and the incentives are too high) for us not to assume that all such work is fraudulent.

    • I think that’s a bit extreme.

      While there are certainly pressures to publish in academia, most scholars a) are genuinely interested in the research they're doing (so why fake results or cut corners when there are important implications for the work we're doing?), and b) carry the research out ethically. Most scholars appreciate the importance their reputation plays in their careers: all it takes is one unethical act to ruin someone's career. But people are people, and a small number of them will try to cheat to win. The funny thing is that the effort put into lying often exceeds the effort to do the research properly (as in the LaCour case). The bigger issue is the many arbitrary decisions that researchers must make on a daily basis that create problems for hypothesis testing; these are not unethical decisions, but they can influence published results.

      Anyway, there's nothing wrong with private data collection, provided it is made available so that others can check/replicate the work that was conducted. We shouldn't eschew that type of data, but I agree that it should be scrutinized more carefully.

      • Can someone clarify what “privately collected data” means? I’m confused since @Todd’s comment seems to indicate that even privately collected data is available to others.

        I thought non-availability to others was a key characteristic of privately collected data?

        • I also have no idea what is meant by “privately collected.” Is this supposed to be as opposed to government funded/collected? LaCour could very easily have done a survey of attitudes. The reason his study would have been hard to do is that it involved random assignment to 4 groups and multiple follow-ups.

        • I assumed @Ana meant any data that were collected with private research funds. For instance, Andrew could use his own research funds to conduct a study and have no obligation to make his newly collected data public (until a publication has resulted from the data). This scenario seems different than data collection funded by public monies or intended for public use (e.g., American National Election Study, General Social Survey, etc.).

          But perhaps I’m reading too much into this. As far as I know, there is no such thing as completely “private” data in the social sciences because authors are expected to share data from published research.

  6. http://www.poliscirumors.com/topic/what-if-the-ml-grants-were-bribes-fasten-your-seatbelt

    I published the following on the intellectual godhead forum PSR almost immediately after this scandal broke (~6 months ago). Maybe someone with better connections than myself can inquire in the right areas. Citations for the claims about philanthropic organizations will follow, as they were not included in the PSR thread.

    ______________________
    As people turn their eyes to the senior members involved, we should ask a few more questions. I've seen it said on PSR: "where there is smoke, there is fire." This rabbit hole might get deeper.

    If the study had "slipped under the radar of everyone who was supposed to be paying attention to that sort of thing," what would there have been? $793,000 in grants for a project that says 20-minute conversations will turn the empathy key in a vast majority of those exposed.

    What if the money was donated to him by interested parties as a sort of bribe? And not just to him, but to all of those involved, with him as a conduit, as a signal on his CV for future studies. It'd be a powerful incentive for all involved when the message is righteous, no? What if a university continually suffering from funding cuts takes the left-hand path and looks the other way while its upper faculty indulge?

    Those foundations denied they gave him the money or had any records of him, and then the grants were gone; "let's talk finger pointing and data semantics." How much of a follow-up has there been on why in the world even a very dumb person would put that much money on his CV? It's like Bart Simpson changing a D to an A; a B is much safer, but nooo. Maybe they (one, some, many of the grant donors) did give him some money because they wanted his paper's result for their organization. Who knows the editors of Science? Is that a popular rag? You think it is possible for many monies to converge in order to pull a paper through the system? It is not often that social science is taken seriously by an actual science journal, but when it is, it just so happens to be this one paper, by a kid with $793,000 in phantom dollars on his CV.

    One can look at the history of philanthropic organizations in the US: many of them farm research out to universities with grant money, sometimes for decade-long policy studies whose agenda was predetermined from the start. It would not surprise me if there were networks that cooperate on large research projects in this fashion, because *I study politics* (although I'll leave it up to you to prove me wrong).

    Back on topic. The paper's result: what else is twenty minutes long? TV shows without ads. Were any of the grant-giving foundations involved in telecommunication programs at other universities? Do any of them have grants toward journalism? Cognitive studies? Language? Public relations? Starbucks had a pretty popular policy recently where they said to confront people about controversial political beliefs, right?

    LaCour also had a job lined up at Princeton. Bribery of this sort, I now posit, might be a method used by institutions to groom compliant mouthpieces for the future. Brian Williams is now a known embellisher and liar due, probably, to ego. I'd venture a guess and say the same is true here. Leverage is acquired when you know something about someone that they don't want other people to know. Ask Sherlock what it looks like when wealthy people and institutions exercise their special forms of leverage within socio-political realms, or even personal psycho-social realms. I heard he has some knowledge in those areas.

    Could this give a more sane motive to the one dubbed the 'Compulsive Liar'? Or was his personality the reason for the convergence? A project by philanthropic organizations to justify and then promote media-representation quotas about gay marriage, which evaporates into thin air when the highly dubious paper used to justify it ends up under a microscope? If David Magog had foreknowledge of the fraud and sat on it, there is a means for scheming, a means for glory *or* blackmail: leverage. Even if they are all found out, who has the will to press such a touchy PC issue to the core?

    Again: $793,000 in grants for a project that says 20-minute conversations will turn the empathy key in a vast majority of those exposed. This has "can be used by advocacy groups and media companies to dedicate TV shows to this topic as a form of soft power" written all over it. The Ford Foundation, you say? Giving grants to this? Could they want to try and get such television programming on inside of… say, Russia? Max Keiser said on his show, The Keiser Report, on English RT that American executives had asked him to do what Abby Martin did: step down in a very public way and denounce Russia for aggression in Crimea. Just something to keep in mind if you don't study politics. All they need to do is ask the writers of shows popular with their target demographics to promote the cause because "it's righteous *and* effective."

    In conclusion, look elsewhere than the culprit, the Lone Gunman, if you will, and seek the answers to the vanishing $793,000. Is Scooby-Doo still around to sniff out the mystery? If not, ask David Russell Trust. I'm sure he learned something I didn't when he was there.

    Robert F. Arnove, ed., “Philanthropy and Cultural Imperialism: The Foundations at Home and Abroad.” Indiana University Press, 1982.

    Joan Roelofs, “Foundations and Public Policy: The Mask of Pluralism.” State University of New York Press, 2003.

  7. Can you ask/convince/persuade Ira Glass to do a TAL show on unbelievable social science survey results that turn out phony, either intentionally (made-up data) or inadvertently (p-value hacking, forking paths)? Now _that_ would be a public service.

  8. Wait a second… If randomization occurred and the experiment was implemented properly, and the outcomes are "so easy" to measure, then an independent party should measure the outcomes. LaCour obviously did not have the resources to deploy a 12,000-person panel survey, but Green does. The answer is there; it just needs to be measured.

    • It is a plausible candidate in general (e.g., the movement out of the closet means that people are more likely to know that they know someone who is gay, which could over time lead to changes in attitudes), but having that contact be a brief scripted encounter with a stranger is not plausible.

  9. UCLA Policy 900 III. B.: Principal Investigator Eligibility:

    “Faculty advisors or mentors will typically be designated as Principal Investigators for graduate student fellowships awarded as grants. Graduate students, Postdoctoral Scholars and other trainees may not normally serve as a Principal Investigator, Co-PI or Multiple PI on extramurally sponsored contracts or grants.”

    http://www.adminpolicies.ucla.edu/app/Default.aspx?&id=900

    LaCour was not even eligible to receive grants! Both Vavreck and Green have received large grants and I’d bet they are well versed in university grant policy. It seems like there is no question they knew this and looked the other way.

  10. I wrote the following about this scandal last year in Taki’s Magazine:

    The scandal has led to many thumbsucker articles about the replication crisis in science and other weighty topics. But almost all of them are missing the point that even if this analysis had been honest, it still wouldn’t have been Science-with-a-capital-S as most people think of the word. Rather, it would have been lowly marketing research. This was never claimed to be a study of whether or not gay marriage was a good idea. Instead, it just purported to be research into how best to spin gay marriage to voters.

    And that’s emblematic of a trend in which the social sciences, having repeatedly failed to demonstrate the truth of the political dogmas espoused by most leftist social scientists, are slowly repositioning themselves as an arm of the marketing industry.

    It’s widely assumed by people on the left that the reason most social scientists vote like they do is because their findings support their leftist views, such as that race is only skin deep, that sex is just a social construct, and that social engineering works. Many people on the right, in contrast, suspect that social scientists come up with this data because they are leftist.

    But the truth is far more ironic: leftist social scientists seldom produce numbers supporting their leftist prejudices. … Hence, social scientists have been increasingly focused not on truth finding but on how better to manipulate the masses.

    http://takimag.com/article/ten_thousand_haven_monahans_steve_sailer/print#ixzz3ucTcJTdV

  11. “I imagine that now, after several months have passed, he'd no longer think it was a wise use of resources to try to replicate this fake study.”
    It was either a fake study or a real study with a funding source that didn't want to be acknowledged (such as the LA Gay and Lesbian Center misappropriating government funds provided for another program). My understanding is that the canvassers were volunteers who were lobbying for equal marriage, and that the study was motivated by a desire to know how effective such lobbying was. So only the survey firm would have been paid.
    The keenness with which both Green and Broockman expressed a desire to repeat the study suggested they both actually believed the study was real.
    Any news about which century UCLA is going to complete its ethics investigation in?

  12. I agree with Prof. Gelman on the whole LaCour thing, so that goes without saying. However, I really wonder about this statement that "Professor Green says he and his colleagues have read 900 papers. And they haven't seen anything like this result—anyone who's changed people's views and it lasted like this . . . ." I guess I need to understand what "and it lasted like this" means. I think we should be a lot more careful in understanding what it means for someone to change their mind. I mean, Green's statement suggests (1) that someone can affect another person's decision, and (2) that the change is usually temporary (with no clear sense of the rate of decay). I think one of the sloppy features of this study is the naive notion that you study how people change their minds by simply going out and talking to people and collecting the results, independent of the test subjects' social networks or personal circumstances. In fact, other studies (cited below) prove that you need to understand social contexts in order to understand decision making and behavior.

    In the behavioral economics literature as well as the marketing literature there is a lot of success in finding instances where "framing" a particular idea or point will change people's decisions. I mean, in a very real sense the "framer" of a position is changing how a consumer of that message perceives the underlying facts. Alternatively, in the networks and social psychology literatures there are many studies that demonstrate the effect of friendship networks on drug use and risky behavior in adolescents (http://www.tandfonline.com/doi/abs/10.1080/dep.7.1.21.37). Here is another study on the effects of friendship networks on obesity and other behaviors (http://www.sciencedirect.com/science/article/pii/S0378873309000495).

    For a really tongue-in-cheek example you could refer to the Jimmy Kimmel video asking people whether they prefer the Affordable Care Act or Obamacare (https://youtu.be/sx2scvIFGjE)–though I understand this is not a perfect example.

    As a caveat, I don’t believe that social networks are the entire basis for decision making. I am just saying that it is more complicated than just talking to people independent of some knowledge of their social reality.

    So I would say that there needs to be a lot more precision in the conversation about what it means to change people's minds and how rare a phenomenon it is. For me this is again a case of different academic disciplines not talking to each other or learning from each other's work.
