A note from John Lott

The other day, I wrote:

It’s been nearly 20 years since the last time there was a high-profile report of a social science survey that turned out to be undocumented. I’m referring to the case of John Lott, who said he did a survey on gun use in 1997, but, in the words of Wikipedia, “was unable to produce the data, or any records showing that the survey had been undertaken.” Lott, like LaCour nearly two decades later, mounted an aggressive, if not particularly convincing, defense.

Lott disputes what is written on the Wikipedia page. Here’s what he wrote to me, first on his background:

You probably don’t care, but your commentary is quite wrong about my career and the survey. Since most of the points that you raise are dealt with in the post below, I will just mention that you have the trajectory of my career quite wrong. My politically incorrect work had basically ended my academic career in 2001. After having had positions at Wharton, the University of Chicago, and Yale, I was unable to get an academic job in 2001 and spent 5 months being unemployed before ending up at a think tank, AEI. If you want an example of what had happened, you can see here. A similar story occurred at Yale, where some US Senators complained about my research. My career actually improved after that, at least if you judge it by getting academic appointments. For a while universities didn’t want to touch someone who would get these types of complaints from high-profile politicians. I later re-entered academia, though eventually I got tired of all the political correctness and left academia.

Regarding the disputed survey, Lott points here and writes:

Your article gives no indication that the survey was replicated, nor do you explain why the tax records and those who participated in the survey were not of value to you. Your comparison to Michael LaCour is also quite disingenuous. Compare our academic work. As I understand it, LaCour’s data went to the heart of his claim. In my case we are talking about one paragraph in my book, and the survey data was biased against the claim that I was making (see the link above).

I have to admit I never know what to make of it when someone describes me as “disingenuous,” which, according to the dictionary, means “not candid or sincere, typically by pretending that one knows less about something than one really does.” I feel like responding, truly, that I was being candid and sincere! But of course once someone accuses you of being insincere, it won’t work to respond in that way. So I can’t really do anything with that one.

Anyway, Lott followed up with some specific responses to the Wikipedia entry:

The Wikipedia statement . . . is completely false (“was unable to produce the data, or any records showing that the survey had been undertaken”). You can contact tax law Professor Joe Olson who went through my tax records. There were also people who have come forward to state that they took the survey.

A number of academics and others have tried to correct the false claims on Wikipedia but they have continually been prevented from doing so, even on obviously false statements. Here are some posts that a computer science professor put up about his experience trying to correct the record at Wikipedia.

http://doubletap.cs.umd.edu/WikipediaStudy/namecalling.htm
http://doubletap.cs.umd.edu/WikipediaStudy/details.htm
http://doubletap.cs.umd.edu/WikipediaStudy/lambert.htm
http://doubletap.cs.umd.edu/WikipediaStudy/

I hope that you will correct the obviously false claim that I “was unable to produce the data, or any records showing that the survey had been undertaken.” Now possibly the people who wrote the Wikipedia post want to dismiss my tax records or the statements by those who say that they took the survey, but that is very different than them saying that I was unable to produce “any records.” As to the data, before the ruckus erupted over the data, I had already redone the survey and gotten similar results. There are statements from 10 academics who had contemporaneous knowledge of my hard disk crash where I lost the data for that and all my other projects and from academics who worked with me to replace the various data sets that were lost.

I don’t really have anything to add here. With LaCour there was a pile of raw data and also a collaborator, Don Green, who recommended to the journal that their joint paper be withdrawn. The Lott case happened two decades ago; there’s no data file and no collaborator, so any evidence is indirect. In any case, I thought it only fair to share Lott’s words on the topic.

17 thoughts on “A note from John Lott”

  1. There are good reasons to believe that the court-of-public-opinion gets things wrong more often than the [social-]scientific review process, namely that cognitive biases and shoddy reasoning are even more endemic to the former than the latter. And yet we know that the scientific review process gets things wrong pretty often. So I think it’s worth taking the “accused” seriously and being thoughtful and deliberative. Glad that you posted this.

  2. How come there wasn’t a formal investigation into Lott’s work by one of his university employers? The data-faking allegations seem serious enough that most universities would convene a formal committee.

  3. One of Mary Rosh’s, er, John Lott’s many problems would seem to be that he worked hard to undermine his own credibility. So why should anyone believe his claims that stretch credulity anyway…

  4. It’s worth taking a moment to follow some of the links Dr. Lott provides in his defense:

    http://doubletap.cs.umd.edu/WikipediaStudy/

    “However our close observation of Wikipedia points to the company’s willing participation in efforts to promote biased material into “fact.” The company’s business relationships give it high page rank in many search engines, so searches on many terms, disputed or not, naturally draw consumers to Wikipedia material. (Google in particular, a growing icon in politically left-leaning circles, gives high priority to Wikipedia entries.) When controversial topics are ‘frozen’ by Wikipedia editors, they are apparently done so in a form most beneficial to the left wing view, without disclaimer warning a well-intentioned researcher that he or she may be incorporating disputed or unsupported material. When journalists accept such material, whether innocently or by knowingly giving faint diligence to an obligation to get ‘outside’ authoritative sources, the quality of material presented on Wikipedia becomes inappropriately boosted in the eyes of the public. The net effect is a ‘bootstrapping’ process, in which the quality of material which tends to serve liberal political needs is artificially inflated and distributed.”

  5. Thank you for printing Dr. Lott’s reply. I think it’s fair. I’m agnostic about the whole deal, but I believe in people being “innocent until proven guilty.”

  6. News from John R. Lott (John W. is a mathematician with a NAS award):
    https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3756988

    A Simple Test for the Extent of Vote Fraud with Absentee Ballots in the 2020 Presidential Election: Georgia and Pennsylvania Data
    25 pages. Posted: 29 Dec 2020. Last revised: 3 Jan 2021.

    John R. Lott
    US Department of Justice

    Date Written: December 21, 2020

    Abstract
    This study provides measures of vote fraud in the 2020 presidential election. It first compares Fulton county’s precincts that are adjacent to similar precincts in neighboring counties that had no allegations of fraud to isolate the impact of Fulton county’s vote-counting process (including potential fraud). In measuring the difference in President Trump’s vote share of the absentee ballots for these adjacent precincts, we account for the difference in his vote share of the in-person voting and the difference in registered voters’ demographics. The best estimate shows an unusual 7.81% drop in Trump’s percentage of the absentee ballots for Fulton County alone of 11,350 votes, or over 80% of Biden’s vote lead in Georgia. The same approach is applied to Allegheny County in Pennsylvania for both absentee and provisional ballots. The estimated number of fraudulent votes from those two sources is about 55,270 votes.

    Second, vote fraud can increase voter turnout rate. Increased fraud can take many forms: higher rates of filling out absentee ballots for people who hadn’t voted, dead people voting, ineligible people voting, or even payments to legally registered people for their votes. However, the increase might not be as large as the fraud if votes for opposing candidates are either lost, destroyed, or replaced with ballots filled out for the other candidate. The estimates here indicate that there were 70,000 to 79,000 “excess” votes in Georgia and Pennsylvania. Adding Arizona, Michigan, Nevada, and Wisconsin, the total increases to up to 289,000 excess votes.

    I didn’t see any data or detailed mathematical methods; his “coefficients” are not confined to [-1, +1]; I can’t shake the feeling he picked his original matches carefully and produced a “regression to the mean” effect, but without his data, how can we tell? (A rough sketch of the kind of comparison involved follows below.)

    Georgia counted their Nov. 3, 2020 votes three times, one of them a complete manual tally of all ballots. If the effect he found exists, it wasn’t caused by fraud.
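
    As a rough illustration only (a minimal sketch with simulated data and hypothetical column names, not Lott’s actual data or specification), the adjacent-precinct comparison described in the abstract would amount to something like a pair-level regression: take Fulton-minus-neighbor differences in Trump’s absentee vote share, control for the corresponding in-person and demographic differences, and read the intercept as the “county effect.”

    ```python
    # Minimal sketch (simulated data, hypothetical column names) of the kind of
    # adjacent-precinct comparison described in the abstract. This is NOT
    # Lott's code or specification -- just an illustration of the setup.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n_pairs = 40  # hypothetical number of Fulton/neighbor precinct pairs

    # Pair-level differences (Fulton precinct minus its matched neighbor).
    pairs = pd.DataFrame({
        "diff_inperson_trump_share": rng.normal(0.0, 0.05, n_pairs),
        "diff_black_voter_share":    rng.normal(0.0, 0.10, n_pairs),
    })

    # Simulated outcome: difference in Trump's absentee-ballot share, generated
    # here with no "county effect" at all (true intercept = 0).
    pairs["diff_absentee_trump_share"] = (
        0.9 * pairs["diff_inperson_trump_share"]
        - 0.2 * pairs["diff_black_voter_share"]
        + rng.normal(0.0, 0.03, n_pairs)
    )

    # Regress the absentee difference on the in-person and demographic
    # differences; the intercept is then read as the county "effect".
    X = sm.add_constant(pairs[["diff_inperson_trump_share",
                               "diff_black_voter_share"]])
    fit = sm.OLS(pairs["diff_absentee_trump_share"], X).fit()
    print(fit.params)  # 'const' is the estimated county effect
    ```

    Nothing in this setup restricts the slope coefficients to [-1, +1], and the estimated intercept depends on which precincts are treated as matches, so the precinct-pair data themselves would be needed to judge the 7.81% figure.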

    • > It first compares Fulton county’s precincts that are adjacent to similar precincts in neighboring counties that had no allegations of fraud

      It’s like Cruz saying there was a legit challenge ’cause a lot of people think there was fraud.

      Clearly, if there weren’t allegations of fraud made about the vote in that good county, any deviation in patterns in another comparison county where allegations were made would validate the allegations of fraud.
