(Further ranting on Public Choice’s terrible decision:) If Uri Geller submits a paper claiming to have photographs of the Loch Ness monster, you don’t just say, Hey, the photos look real, let’s go publish!

A couple days ago we talked about a crappy paper published by the journal Public Choice alleging massive election fraud in the 2020 election. As political scientists Andy Eggers and Justin Grimmer put it, the article in question was “riddled with errors and fails to accurately report the research that was conducted. At several points the paper misstates the regression specification being used (as we confirm through replicating Lott’s (2022) results), misreports regression coefficients (always in a direction favorable to his argument), and incorrectly reports variables used to produce a result.”

I analogized this decision of Public Choice to publish a terrible paper on elections that aligned with their political sympathies to the Lancet’s decision a few years ago to publish a terrible paper on gun control that aligned with their political sympathies.

At best, Public Choice sent the article to an incompetent or overloaded associate editor who sent it to three incompetent or overloaded reviewers who rubber stamped the article because it looked reasonable and is on a newsworthy topic. Or maybe they held the article to a lower standard for reasons of fomo or because they didn’t want to be censors—this is the same sort of reasoning that explains why Statistical Science published that Bible Code paper in 1994 and JPSP published that ESP paper in 2011: paradoxically, the fact that a paper makes an implausible claim can motivate journals to relax their usual standards. The worst thing would be if Public Choice went to an extra effort to avoid critical review of this paper because they really really wanted to publish it.

I’d say the same about Lancet publishing that gun control article. Some claims are just too juicy to check.

It’s not enough that a paper look superficially reasonable

Afterward I had an email exchange with someone who felt I’d been too harsh on the author and the journal, as the errors might not be obvious on reading the paper.

And I agree that the paper in question looks superficially reasonable. Of course it looks superficially reasonable; that’s how it was considered for publication in a legit journal in the first place. That gun control article looked superficially reasonable too. Hell, Andrew Wakefield’s antivax article looked superficially reasonable. That’s not enough!

Now, look, I can understand that Public Choice is mostly a theoretical journal, and maybe their editors don’t really have the skills to evaluate an empirical claim—all they can do is say that it has no obvious errors. In that case, they should either run it by reviewers such as Eggers and Grimmer, or, if that’s not possible, just not publish it! There are lots of journals in the world, and there’s no reason that your journal, which “studies the intersection between economics and political science,” needs to publish an empirical paper making claims about voting, claims which the journal’s editors are not competent to evaluate.

2 reasons to be suspicious

In any case, here are 2 immediate suspicious bits on the first page of the paper in question:

(1) Author’s name (he has a track record of bad research as well as lying).

(2) The claim in the abstract of “an average of 255,000 excess votes for Biden.”

The suspicious bits are not dispositive: (1) someone can lie and do bad research one day and do good research another day, (2) the 255,000 thing could be true, even though nobody has ever presented evidence of fraud on anything like that scale, despite lots of high-profile claims.

Indeed, regarding point (2), the claims in Lott’s paper were already refuted in a paper published by political scientists Eggers and Grimmer!

Here’s what Eggers and Grimmer wrote at the time:

Another focus of the Trump team’s accusations was the processing of absentee ballots in key states that Biden narrowly won. Among other claims, they alleged that Fulton County, GA, and Allegheny County, PA, were major centers of voter fraud in the 2020 election. Most of these allegations relied upon hearsay affidavits or debunked videos purportedly showing voters stuffing ballots. But in a paper posted in late December 2020, Lott (20) claims to provide statistical evidence that irregularities in the absentee vote counting procedure in Fulton County and Allegheny County suppressed votes for Trump and bolstered Biden’s vote count. Lott examined precincts along the border of Fulton and Allegheny Counties and argued that he detected anomalous support for Biden in his absentee ballot share relative to his in-person share of ballots in Fulton and Allegheny Counties. Lott’s paper received immediate and widespread attention. Peter Navarro, Assistant to the President and Director of the Office of Trade and Manufacturing Policy, touted the claim as solid evidence of fraud. President Trump tweeted out a link to the paper.

Lott’s claims, however, do not withstand scrutiny. Using Lott’s own data, we show in SI Appendix, section G that the specification he uses to analyze absentee voting patterns produces different conclusions depending on the entirely arbitrary order in which counties are entered in the dataset. Briefly, Lott posits that, if absentee ballots were correctly handled, the difference in Trump support across a boundary that separates a Democratic county from a Republican county should be similar to the difference in Trump support across a boundary that separates one Republican county from another. But Lott’s conclusion depends entirely on the order in which the differences are computed for the Republican–Republican pairs. The conclusion is reversed when an alternative and equally justified order is used.

To achieve Lott’s objective of comparing voting patterns across county boundaries, we reanalyze Lott’s data using a more standard specification that does not suffer from these problems. . . .

Lott (20) also claims to show that 2020 turnout rates were higher than one would otherwise expect in a set of counties where Republicans have alleged that fraud took place. Lott argues that there was an “unexplained increase in voter turnout” (ref. 20, p. 13) in the key counties of between 1.26 and 2.42%, which Lott says is equivalent to 150,000 to 289,000 votes in those states. Lott concludes that this is evidence consistent with fraud.

To determine whether the “suspicious” counties had higher turnout, Lott checks whether turnout in the 2020 election was higher than would be expected (given previous turnout, political leaning, and local demographics) in counties where, according to Republican lawsuits filed after the election, fraud may have taken place. Lott identifies 19 counties across six swing states where Republicans made fraud allegations.†† He then compares turnout in these counties to turnout in other counties in the same six states plus all counties in three other swing states (Florida, Ohio, and North Carolina). He argues that, if turnout is higher in these counties than would be expected given covariates, it would be evidence of fraud.
As we explain in SI Appendix, section H, we dispute the premise of this analysis: Turnout varies across counties for many reasons, and it is unreasonable to ascribe a small unexplained difference to fraud. As it happens, Lott’s finding is not robust to sensible departures from his chosen specification, so it is not necessary to dispute the premise.
Our analysis of county-level voting data for 2016 and 2020‡‡ indicates that Lott’s conclusions are driven by the inclusion of states that have lower turnout increases and no suspicious counties—namely Florida, North Carolina, and Ohio. . . .
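To see concretely why the ordering issue Eggers and Grimmer describe matters, here’s a toy sketch in Python. This is not their actual analysis: the numbers are entirely synthetic and the county names are made up. The point is just that when a specification relies on signed differences across county pairs, and Republican–Republican pairs have no natural “first” county, an arbitrary ordering choice can flip the conclusion.

```python
# Toy illustration of an order-dependent specification (synthetic data).
# Suppose the test is: "the absentee-vote gap across a Dem-Rep county
# boundary should look like the gap across Rep-Rep boundaries."
# For Rep-Rep pairs there is no natural "first" county, so the sign of
# each pairwise difference depends on an arbitrary ordering choice.

# (county1, county2, county1_share, county2_share) -- all made up
rep_rep_pairs = [
    ("CountyA", "CountyB", 0.52, 0.48),
    ("CountyC", "CountyD", 0.55, 0.45),
]

def mean_signed_diff(pairs, order):
    """order=+1 computes share1 - share2; order=-1 computes share2 - share1."""
    diffs = [order * (s1 - s2) for _, _, s1, s2 in pairs]
    return sum(diffs) / len(diffs)

dem_rep_gap = 0.03  # made-up gap across the Dem-Rep boundary

for order in (+1, -1):
    benchmark = mean_signed_diff(rep_rep_pairs, order)
    # The "anomaly" is the Dem-Rep gap minus the Rep-Rep benchmark:
    # with one ordering it is negative (no anomaly), with the other
    # it is positive (apparent anomaly) -- same data, opposite conclusion.
    print(order, round(dem_rep_gap - benchmark, 3))
```

Eggers and Grimmer’s remedy, per the excerpt above, is a more standard specification whose result does not depend on this arbitrary choice.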

OK, sure, it’s quite possible the Public Choice editors didn’t see that paper by Eggers and Grimmer. The point is that the editors should already have had good reasons to be suspicious. If Uri Geller submits a paper claiming to have photographs of the Loch Ness monster, you don’t just say, Hey, the photos look real, let’s go publish!

Op-eds in the guise of research articles

In this case I’m guessing that the Public Choice editors, like the Lancet editors before them, either knowingly published bad work, or saw work that fit their political tastes and avoided giving it serious scrutiny, or simply wandered so far from their area of expertise that they’re no longer acting as a scholarly journal. At this point, they’re just publishing op-eds. Unfortunately, these are op-eds in the guise of research articles. And this has two unfortunate consequences: first, it inappropriately gives credence to junk claims; second, it appropriately lowers the reputation of the journal.

Look. We should put this in the proper perspective. Running political propaganda in a scientific journal is bad, but there’s a lot worse bad out there. Take Walter Duranty, for example: he pushed political propaganda in the New York Times! That’s a lot worse than doing it in Public Choice, right? I guess that as an academic researcher there’s something that particularly bugs me when a scholarly journal does it.

Again, I thank Eggers and Grimmer for doing the unpleasant work of tracking down all those problems with the paper.

3 thoughts on “(Further ranting on Public Choice’s terrible decision:) If Uri Geller submits a paper claiming to have photographs of the Loch Ness monster, you don’t just say, Hey, the photos look real, let’s go publish!”

  1. I’d say the standard for publishing allegations of massive voter fraud, written by a partisan for one party against an opposing party, in the context of ubiquitous claims that the election was stolen, claims that have systematically failed to prove the case in court, should necessarily be a high one.

    And there’s this:

    > No one claims that Republicans systematically encouraged people to submit affidavits about vote fraud, specifically in places where Democratic voter turnout had been heaviest.

    C’mon. Perhaps someone could claim that’s true on a purely technical reading. But it’s absurdly misleading in the context of what took place in this election.

    > Afterward I had an email exchange with someone who felt I’d been too harsh on the author and the journal, as the errors might not be obvious on reading the paper.

    If the errors weren’t obvious, the context made the potential for bias obvious. I think a rigorous critique is warranted.

  2. I want to make just two short comments about the validity of the published research.
    First, we should abandon the concept of “published research”. There are just “written pieces that are made public”. Whether there’s a logo in the corner says almost nothing about the quality or validity of the claim.
    Second, the quality of the research has to be judged by the peer review it goes through (and the (hopefully positive) comments it receives). Since we scientists agreed that peer review ought to be anonymous (a highly dubious assumption), we should be satisfied with the indirect evidence of the peer review process, including the historical track record of the quality of referee recruitment decisions made by the journal editor, if any (since we chose to make that fact non-public information as well). Our only hope is PubPeer and the like, which continue the peer review process beyond the publication date (or “date when the article is made public”). For non-journal venues we have PreReview.org and other similar services. Blog ramblings are fine, but if one wants to make a difference, one should abandon anonymity, go to one of these venues, and leave substantial feedback for future generations to discover.

    Forcing the journal to retract the paper is probably no longer a go-to solution, since there are basically no repercussions for editorial arrogance.
