John Lott as possible template for future career of “Bruno” LaCour

The recent story about the retracted paper on political persuasion reminded me of the last time that a politically loaded survey was discredited because the researcher couldn’t come up with the data.

I’m referring to John Lott, the “economist, political commentator, and gun rights advocate” (in the words of Wikipedia) who is perhaps better known on the internet by the name of Mary Rosh, an alter ego he created to respond to negative comments (among other things, Lott used the Rosh handle to refer to himself as “the best professor I ever had”).

Again from Wikipedia:

Lott claimed to have undertaken a national survey of 2,424 respondents in 1997, the results of which were the source for claims he had made beginning in 1997. However, in 2000 Lott was unable to produce the data, or any records showing that the survey had been undertaken. He said the 1997 hard drive crash that had affected several projects with co-authors had destroyed his survey data set, the original tally sheets had been abandoned with other personal property in his move from Chicago to Yale, and he could not recall the names of any of the students who he said had worked on it. . . .

On the other hand, Rosh, I mean Lott, has continued to insist that the survey actually happened. So he shares that with Michael LaCour, the coauthor of the recently retracted political science paper.

I have nothing particularly new to say about either case, but I was thinking that some enterprising reporter might call up Lott and see what he thinks about all this.

Also, Lott’s career offers some clues as to what might happen next to LaCour. Lott’s academic career dissipated and now he seems to spend his time running an organization called the Crime Prevention Research Center which is staffed by conservative scholars, so I guess he pays the bills by raising funds for this group.

One could imagine LaCour doing something similar—but he got caught with data problems before receiving his UCLA social science PhD, so his academic credentials aren’t so strong. But, speaking more generally, given that it appears that respected scholars (and, I suppose, funders, but I can’t be so sure of that as I don’t see a list of funders on the website) are willing to work with Lott, despite the credibility questions surrounding his research, I suppose that the same could occur with LaCour. Perhaps, like Lott, he has the right mixture of ability, brazenness, and political commitment to have a successful career in advocacy.

The above might all seem like unseemly speculation—and maybe it is—but this sort of thing is important. Social science isn’t just about the research (or, in this case, the false claims masquerading as research); it’s also about the social and political networks that promote the work.

57 thoughts on “John Lott as possible template for future career of ‘Bruno’ LaCour”

  1. Yes, the first thing I thought of when the ol’ hard drive crash excuse was noted by LaCour was Lott. Everything about the Lott affair was completely shameless, though I suspect that LaCour, being so junior, will face much greater consequences for his actions than Lott ever did.

    Please be precise, however, about the status of the paper. Science (the journal) has issued an editorial expression of concern dated May 20th, I believe. The paper has not been formally retracted, though the last author (your colleague I believe?) has requested a retraction from the journal.

    • Adding: the nature of what Lott was pushing in his bogus study was attractive to an audience regardless of its truth, and it remains attractive. I would say that LaCour’s results were similarly attractive from one point of view, but I suspect that the audience that might have been enamored of those results is also prepared to renounce him completely. In Lott’s case, we know that this is not true. I leave it to readers to speculate why that might be.

  2. Data have a tendency to disappear in cases of controversy (think of the IRS disk crash and the Clinton emails). If the usual excuse for not sharing data is intellectual-property concerns or IRB requirements, could we at least insist that the raw data be locked in a safe place under the control of someone other than the researcher, and then released, say, in 10 years’ time? Of course, after 10 years any interest in a specific research topic might disappear (unless you are a Gregor Mendel establishing a completely new branch of science), but maybe people will be reluctant to put a time bomb under their careers anyway.

  3. Andrew,

    I know the problems with “he said/she said” journalism, but considering that this is all so new and we haven’t actually heard a confession of anything yet, just to be fair we should reference LaCour’s note to Retraction Watch a couple hours ago (below with link, including a quote in the next sentence). Note, too, that the only thing anyone has even claimed that LaCour admitted (as far as I can tell) is that “some of the details of the data collection were falsely described in the published report.” I’m gonna give him a chance to say something interesting – who knows, maybe this whole thing was him going reverse-Sokal, trying to show how easy it is to publish fraudulent quantitative analyses (ok, probably not, but still – let’s give the kid until next week before we, like, totally rip him to shreds*).

    http://retractionwatch.com/2015/05/20/author-retracts-study-of-changing-minds-on-same-sex-marriage-after-colleague-admits-data-were-faked/#more-28479

    “I will supply a definitive response (including a timeline of events) on or before May 29, 2015. I appreciate your patience, as I gather evidence and relevant information, since I only became aware of the allegations about my work on the evening of May 19, 2015, when the not peer-reviewed comments in “Irregularities in LaCour (2014),” were posted publicly online.

    I must note, however, that despite what many have printed, Science has not published a retraction of my article with Professor Green. I sent a statement to Science Editor McNutt this evening, providing information as to why I stand by the findings in LaCour & Green (2014). I’ve requested that if Science editor McNutt publishes Professor’s Green’s retraction request, she publish my statement with it.”

    *Note: if he does claim he’s been going reverse-Sokal, he better cite me! We don’t care for the plaigs ’round here.

    • Jrc:

      Yes, various possibilities remain, including that LaCour might follow Lott’s example and simply insist that the data are real despite the lack of any documentary evidence. We shall see.

      • He said he did it on Qualtrics, so shouldn’t those data be on their servers? Last time I used the service, everything was stored on their servers.

  4. I hope that if LaCour faked the data (as seems likely) that he’s wise enough to choose the Stapel route rather than the Lott route. Being a hack on TV and getting abused by Piers Morgan and shut down by retired colonels is by far the more embarrassing fate.

  5. I get the sense that we are taking the hard line on LaCour but treating Donald Green with kid gloves?

    Is Green really the innocent dupe he is playing?

  6. An aside: Is it common for grad students to write papers with a Professor at another institution as co-author where the student’s own advisor isn’t involved in the study at all?

    • Green offers a summer seminar for PhD students across the country to work on survey methodology, which LaCour attended. So while it may not be common for profs generally to copublish with students outside the department, this case is a little unusual.

    • What’s the normal protocol about retaining the raw, non-anonymized data that has sensitive personal information about respondents?

      I’m trying to judge if LaCour’s reasons for destroying the data are reasonable. On the one hand, I can imagine the uproar that would result from “900 respondents’ personal data stolen from UCLA lab survey sheets”.

      OTOH, wouldn’t depositing the raw / sensitive data in some third party secure vault be the better solution to this problem of balancing data-safety against verifiability?

      What’s the typical solution?

      • I don’t know… it is odd that it happened so fast, it is odd that he collected it on public-ish servers in the first place if he was worried about confidentiality that much, but it is not totally crazy that he would have destroyed the link between identifying household information and sensitive survey responses. I realized how dangerous that could be when I looked at some responses to a raw health survey – people could be hurt by that information getting out. Hurt bad.

        That said, it is totally unclear to me exactly what else was destroyed, exactly what is left, and exactly why his co-author didn’t know the state of the data. This should’ve all been part of a clear, stated data-handling policy.

        But honestly – if someone wrote to me and said “I want every piece of the raw data on your abortion survey*” I’d tell them they could have most of it, and if they needed the rest I’d have to go all Restricted Data on them and sit them in a basement with no internet, no cell phone, and they could only export tables. That seems like a lot of work, though.

        *note: I never did an abortion study, that was just an example.

        • Well, we do know what *excuse* his co-author (Don Green of Columbia) gave for not knowing the state of the data.

          He claims that since he did not have approval from Columbia’s IRB, he scrupulously avoided looking at the raw data. I don’t deal with IRBs, but it sounds like a weak, fishy excuse to me.

          Did Don Green even try to get his IRB’s approval? Would they have said no had he gone to them? This is with the knowledge that LaCour’s IRB had already signed off on the study.

      • You never store the identifying information with the data. If you have physical surveys, you put the personal data on a cover sheet, print the ID number on both the cover sheet and the survey, then rip the cover sheet off and store it in a secure location; a rough code sketch of the electronic version follows below. I’ve seen places with safes, but depending on the risks it could also be an encrypted file. The feds have a set of guidelines you have to follow to set up a secure research office (I happen to have been shepherding a security plan for filing with NCES through the combination of the building people and the IT people at our place). The personal data should never be entered into the same file or table as the rest of the data. If the data ever existed (a big if at this point), it sounds like it was collected by someone with no training in survey research design or ethics at all.
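
        To make that concrete, here is a minimal sketch in Python of the split described above. The file names, field names, and the use of the third-party cryptography package are my own illustrative assumptions, not anyone’s actual workflow: the responses file carries only a study ID, and the identifying “cover sheet” lives in a separate encrypted file whose key is held apart from the data.

            # Sketch only (invented file/field names): keep responses and
            # identifiers in separate files, linked by a study ID, and encrypt
            # the identifier file with the third-party "cryptography" package.
            import csv
            from cryptography.fernet import Fernet

            rows = [
                {"study_id": "S001", "name": "Jane Doe", "address": "1 Main St", "q1": "agree"},
                {"study_id": "S002", "name": "John Roe", "address": "2 Elm Ave", "q1": "disagree"},
            ]

            # De-identified responses: only the study ID could link back to a person.
            with open("responses.csv", "w", newline="") as f:
                writer = csv.DictWriter(f, fieldnames=["study_id", "q1"])
                writer.writeheader()
                for r in rows:
                    writer.writerow({"study_id": r["study_id"], "q1": r["q1"]})

            # The identifying "cover sheet" goes into a separate encrypted file;
            # the key plays the role of the safe and is stored away from the data.
            cover = "\n".join(f"{r['study_id']},{r['name']},{r['address']}" for r in rows)
            key = Fernet.generate_key()  # held by a custodian, not the analyst
            with open("identifiers.enc", "wb") as f:
                f.write(Fernet(key).encrypt(cover.encode("utf-8")))

        Destroying the link would then mean deleting the encrypted identifier file (or its key), while the de-identified responses remain intact and shareable.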

    • Jrc:

      My favorite bit in LaCour’s document was this:

      Instead, I raffled Apple computers, tablets, and iPods to survey respondents as incentives. I located some of the receipts, Link here. Some of the raffle prizes were purchased for a previous experiment I conducted.

      The receipts are for a few computers and peripherals he bought over a 2-year period. But what’s really funny is the last bit. He did a “previous experiment” in which he promised to raffle off electronics, and then he just kept the prizes himself to use in experiment #2?? That ain’t cool.

      And what’s with this R output at 7 decimal places??? I’m having less and less respect for this guy.

      • Did you have to reassess your respect for Don Green too? What’s your impression of his role in all this?

        Some of his excuses seem rather flimsy too, e.g., he is claiming he never looked at the raw data because he didn’t have Columbia IRB approval.

        • The question is why he allowed it to be submitted with his name on it without the project even being IRB approved. Now, I have not always necessarily thought that through myself, and we don’t know if Green was listed on the UCLA IRB records for the study at all. In fact, I don’t think we have heard for sure that the IRB approved this study. I doubt it. It would not be classed as an exempt study, since it involves deception and trying to change people’s opinions on issues that might have social consequences (what if a participant was a member of a conservative religious group and this magic change led to them being expelled; what if a participant was a closeted gay person and the meeting with the canvasser caused them to come out). Part of what is suspicious about the whole thing is the implication that this study had IRB approval, with no real evidence that that was the case.

        • The truth is rather bizarre. The studies by LaCour and Green were done in May 2013. No IRB approval was obtained, neither from UCLA nor from Columbia.

          Don Green & LaCour got their sensational Science paper out in December 2014, at which point they still did not have IRB approval but flat out claimed that they did.

          After the fact, LaCour did apply to UCLA’s IRB for approval. UCLA’s IRB informed him that they cannot review a completed study in hindsight.

          What amazes me is that in spite of knowing that LaCour’s work was not approved & that he had lied to Science that it was, UCLA did nothing at all.

          Seriously, the rot in the system runs way deeper than LaCour. Don Green, UCLA, the IRB: everyone has a lot of answering to do.

        • Well, that explains something, namely how it could possibly have gotten past the IRB.
          Why would Science accept a paper without evidence of IRB approval? Requiring such evidence is standard at many journals.

          Does graduate training in political science simply not include anything about research ethics or the IRB? My undergraduates can’t graduate without having passed the student training and it’s a topic in at least 3 required courses.

          I mean … I guess it is better for Green in a way that he didn’t put any humans at risk, but I would guess OHRP will be doing some auditing at both Columbia and UCLA.

        • Well, Science accepted it because LaCour & Green’s submission said that they *had* UCLA IRB approval. I.e., straight lies.

      • I know right! He literally funded his study via gambling revenues obtained from his own subjects. Some small part of me is like “wow – good hustle grad student” but mostly I’m like “I cannot believe you don’t understand that you can’t exploit your own subjects for money – and so much so that you just out and say it and it is the one thing you confess to that you DON’T apologize for.”

        On the other hand – and forgive me, it was late and Friday (by the way, the dude learned something from journalists about story timing, ¿no?) – what was this whole “wrong variable” thing? Has anyone read closely enough? Does it matter? I got too lost in the terrible formatting.

        The whole thing leaves a pretty bad taste in my mouth, and I am not sure anyone comes out of this all that well. It just seems like it could’ve been handled a lot better. For instance, I actually think this paragraph by LaCour is pretty fair (not necessarily in spirit, but in words):

        “I note that Broockman et al. (2015)’s decision to not present the lead author with the critique directly, by-pass the peer-review process, privately investigate data collection activities without knowledge or consent of the author, demand confidential identifying information from respondents in a study without grounds or standing to do so, publicize unsubstantiated allegations and hearsay prior to a formal investigation, is unprecedented, unethical, and anomalous in the relevant literature.”

        So, my take: We need socio-institutional norms about how to go about replication: how and when to contact people; how long to wait for responses; how and when to make results or accusations public; how to turn accusations into positive contributions to the field (like super-replicate-and-extend, pitched that way)… No one seems to know how to interact with each other. I include myself in that – the best I’ve come up with is “be cool about it,” which is not a completely satisfying heuristic.

        • For one, I was struck by what a hurry Broockman et al. were in. This went from “we are running a replication” to “let us ask for a retraction” in just about 2 weeks.

        • This timeline is given by Broockman himself. Exactly 11 days from the start of the pilot study till the retraction.

          May 6, 2015. Broockman and Kalla launch a pilot of the extension study.

          May 15, 2015. Our initial questions about the dataset arose

          May 17, 2015. Broockman, Kalla, and Aronow prepare and send this report and replication code to Green. Green reads and agrees a retraction is in order

        • This reading of the timeline is very different from the one given by Broockman. Also your earlier comment about there not being IRB approval at all doesn’t seem right. Furthermore, there is no question that data are not to be destroyed, merely identifiers of data. LaCour is parsing in a way that strains or explodes credibility. I would say that Broockman’s one misstep was resorting to the anonymous message board, but if you consider the apparent pressure not to rock the boat, that pressure is quite troubling.

        • @Pinko

          “…but if you consider the apparent pressure not to rock the boat, that pressure is quite troubling.”

          +1

        • Rahul,
          Sorry, I was going by the earlier discussion of Green citing IRB approval, which I now read was a fabrication of LaCour’s. What I cannot possibly imagine is how LaCour could have even a de facto advisor at UCLA who would not realize that a Science paper had been published on a study without IRB approval. I am also deeply disturbed that, if UCLA’s IRB discovered this in January, it could have been squelched by LaCour for months. My reading of the timeline in the NY mag piece and the Stanford doc is that the foundations of skepticism had been building for some time beyond the dates given.

          UCLA’s detection of the lack of IRB approval makes almost everything else moot. The paper would have been retracted on those grounds, then on the fraudulent funding, then on the incorrect details of the study, then on the data irregularities, then on the absence of underlying data. What I will say is that I don’t think this episode has wasted the time of interested onlookers, as some have suggested.

        • I finally get it. There was approval, but approval on fraudulent grounds, in that it came after the alleged fact. I know that hindsight makes things look like they should have been obvious, but with such abject shamelessness it is like people are continually fooled.

        • The sad part is that not much attention is being paid to the lapses / negligence / malfeasance on the part of Don Green, UCLA’s IRB, Jeffrey Lewis (Poli. Sci. department chair) & Lynn Vavreck (LaCour’s advisor).

          This fraud could have been mitigated, had any of the involved people been conscientious about their responsibilities.

          But I think bloggers etc. are just focusing on LaCour, probably because he is so egregiously bad, and ignoring the other guilty parties.

          It is far easier to blame it on one rogue player, an outlier, than to acknowledge that the system has deep flaws. I dread to speculate how much of the published poli. sci. literature is similarly concocted crap.

        • @Pinko
          There was NOT approval.
          A letter like that is the IRB punting it into the Provost or President’s office for further action.
          The “non-serious” finding is based on there being no evidence of harm to participants, which, given that there likely were no participants, is probably fair.
          Interestingly they did not require him to destroy his data or withdraw his publication which would be standard practice at some places.

        • I don’t agree. What they did was go to the senior, presumed corresponding author (I note that the article does not denote a corresponding author), as was correct. The presumed corresponding author dealt with the first author. The first author is said to have claimed the data no longer exist. Therefore the corresponding author chose to submit a retraction request. If any criticism is to be leveled, it is at Green; however, even that seems to make no sense. If the data no longer exist (or never existed), then questions as to the study’s provenance cannot possibly be addressed. This undermines confidence in the study entirely and goes against academic norms. Green is completely within his rights to ask for the study to be retracted. It is possible that he could have merely asked for his name to be removed from the study, but Science has editorial discretion, on behalf of its readers, to make a decision. In no way is this “unfair” to LaCour, who has already admitted to falsification in the study, i.e., the nature of payments to respondents and the sources of funding. Those details alone are enough to cause a paper to be retracted, as they are deliberate falsifications.

          You will note that BKA did not ask for a retraction; they made public their analyses and the reasoning behind those analyses. Green, an author on the paper, asked for the retraction. BKA were entirely professional in their behavior. Given that their interactions with LaCour led them to believe he might not have been honest with them, it appears they went to the senior author. They should be praised for their forthright actions.

        • You don’t think they should’ve sent their analysis to the guy they accused of faking the data before they put it out on the internet?

          For the record – I’m not arguing against the paper being retracted, nor am I arguing that LaCour is a victim here. I’m just arguing that, if you are producing a statistical analysis to argue someone faked their data, you could provide the analysis to that person before you provided it to the press. To me, it ends up looking more like a witch hunt than a trial by jury, even if it turns out LaCour actually was a witch.

          That said, I’m sympathetic to the authors too. I actually do know what it feels like to be a graduate student who is pretty sure some published work is dead wrong (not fraudulent, just dead wrong for clear reasons). And no one can really tell you how to go about dealing with it. It is really awkward. That is why I think we need some norms (and I mean norms, not rules) – so when a student comes across something like this, we can tell them “OK, first thing you should do is… then you should… then you should… and then you should… and maybe then you can think about going public.” Like I said above, “be cool about it” isn’t quite concrete enough.

        • I think my feeling is that they became concerned about their interactions with LaCour, so going to the other author of the paper is in essence part of what you suggest – they did it through Green. I think the way they laid out their concerns in the document is actually quite reasonable for a preprint repository like arXiv. It seems that when Green told them the data no longer existed, at that point there was absolutely no reason not to post the concerns. I will go back and look at the timeline, but it seems that there was some indication that the data no longer existed, or were beyond the earthly bounds as it were. It appears as though LaCour is presenting the thinnest veneer of a response. Take a look at LaCour’s claims about the feeling thermometer analysis.

        • Pinko:

          That makes sense to me. Once you realize you’re in an interaction with someone who you believe will lie, cheat, and steal, it makes sense to back away and reduce all contact.

        • I don’t think he meant he was having participants buy lottery tickets; he meant that he put all the people who participated into a lottery for an iPad or whatever. This is pretty common practice. I’d just be wondering why he didn’t give them away, and if that means whatever study they were connected to also did not exist.

        • Re-reading, I see you are probably right. So the LA GLBT center paid for all the data collection that actually did take place, using their grant from the Haas foundation? Ugh, Brian Nosek was totally right about the amount of time wasted on this by all of us….

      • Raw data in the form of temperature records were requested from one of the prime promoters of AGW, the Climatic Research Unit of the University of East Anglia. The response was that the original data had been lost in a move and all they had was their adjusted data. Another climatologist, when asked for his data, said to another researcher something like, “Why should I give my data to xxx? He’ll just use it to try to prove me [my conclusions] wrong.” Further, they had no record of how their adjustments were made or what they (the deltas) actually were, so one could not reverse engineer the adjusted figures to derive the originals.

        Those questioning the validity of data used to show man-made climate change have had to resort to using the Freedom of Information Act to get the data, and have had limited success. This is one of the reasons why there is so much acrimony in the debate.

        After the problem arose (end of the 1990s), much greater care was taken with the raw data, and it has been disseminated with little issue. Interestingly, that coincides with the start of the ‘pause’, the last 15-18 years (depending on the data set) during which there has been no statistically significant warming. Make of that what you will.

  7. Wow. He’s hanging his defense on a very questionable reading of “or” [data are to be anonymized or identifiers destroyed]. No reasonable reading takes “data” to “destroyed.” Andrew predicted it: he’s going full Lott.

    • Richard:

      Huh? Science, social and otherwise, has long been understood to be a social process. This is not a controversial statement. Social and political networks affect what topics people work on, what is funded, etc.

  8. > “However, in 2000 Lott was unable to produce the data, or any records showing that the survey had been undertaken. He said the 1997 hard drive crash that had affected several projects with co-authors had destroyed his survey data set, the original tally sheets had been abandoned with other personal property in his move from Chicago to Yale, and he could not recall the names of any of the students who he said had worked on it. . . .”

    So is backing up your data – making multiple copies of it and storing them in separate locations so that nothing short of a once-in-a-millennium catastrophe will take out all of them – a practice specific to people working in the physical sciences?* Speaking from my own experience, when we’d go out for a multi-week field campaign, the last thing we’d do at the end of each day would be to make multiple copies of the data we’d collected (a rough sketch of the sort of routine I mean appears after the footnote). It went without saying that the consequences of a single-point failure would be disastrous. “I lost it” stretches the limits of credibility.

    * Rhetorical question
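
    For concreteness, here is a minimal sketch, in Python with invented paths, of the kind of end-of-day routine I mean: copy the day’s data directory to several independent destinations and verify every copy by checksum, so no single-point failure can take out the data. The function names and destinations are my own illustration, not a prescription.

        # Sketch only: mirror a data directory to multiple destinations and
        # verify each copy against a SHA-256 checksum of the original file.
        import hashlib
        import shutil
        from pathlib import Path

        def sha256(path: Path) -> str:
            """Hash a file in chunks so large data files don't exhaust memory."""
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            return h.hexdigest()

        def backup(src: Path, destinations: list[Path]) -> None:
            for dest in destinations:
                copy_root = dest / src.name
                shutil.copytree(src, copy_root, dirs_exist_ok=True)
                # Confirm every file arrived intact before trusting the copy.
                for original in src.rglob("*"):
                    if original.is_file():
                        mirrored = copy_root / original.relative_to(src)
                        assert sha256(original) == sha256(mirrored), f"bad copy: {mirrored}"

        # e.g. one copy on each of two drives kept in different bags
        backup(Path("day_07_data"), [Path("/media/usb1"), Path("/media/usb2")])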

  9. Do not leave disgraced historian Michael Bellesiles out of this debate. He was not yet computer savvy in the mid-1990s, or so he said, and wrote his raw data on yellow legal notepads. After he published his research, the notes were destroyed by water flooding his office. That was a shame, since some of that data will never be replicated, the sources having been destroyed in the 1906 San Francisco earthquake.

    If you are not familiar with the story, it’s fascinating reading. The fraud was uncovered by an amateur historian and a computer programmer enrolled in a history program. Left to professional historians and the formal methodology of questioning peer-reviewed research, the fraud might never have been uncovered.
