No, the polls did not predict “a Republican tsunami” in 2022.

I subscribe to the London Review of Books and read most of every issue because they have lots of interesting articles, often with unusual perspectives. But when it comes to politics, they often thoughtlessly go with conventional narratives, and I guess there’s nobody there to fact-check.

We’ve talked about this before (“But viewed in retrospect, it is clear that it has been quite predictable” and Nooooooooooooo!).

Here’s another case, from writer/editor Adam Shatz, talking about the 2022 U.S. general election:

The polls, unreliable as ever (this was one thing Trump got right), told us that high inflation and anxiety about crime were going to provoke a Republican tsunami.

But that’s wrong! First, the polls are generally not “unreliable,” and it’s particularly wrong to cite Trump here (more on this below). Second, no, the polls in 2022 did not “tell us” there would be “a Republican tsunami.” The pundits in 2022 were off, but the polls did just fine.

For example, here are the poll-based forecasts for the Senate and House of Representatives from the Economist magazine (they should have some copies floating around in the LRB offices, right?):

The actual outcomes (Republicans ended up with 49 seats in the Senate and 222 in the House) are close to the point forecasts and well within the forecast intervals.
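
To make that comparison concrete, here’s a quick check in Python, using the numbers from the Economist forecast as quoted in my letter at the end of this post (ranges of 46-55 Senate seats and 208-244 House seats, with point forecasts of 50.8 and 224.5):

    # Compare the 2022 outcomes to the Economist's poll-based forecast
    # ranges and point forecasts (numbers as quoted in the letter below).
    forecasts = {
        "Senate": {"low": 46, "high": 55, "point": 50.8, "actual": 49},
        "House": {"low": 208, "high": 244, "point": 224.5, "actual": 222},
    }

    for chamber, f in forecasts.items():
        inside = f["low"] <= f["actual"] <= f["high"]
        error = f["actual"] - f["point"]
        print(f"{chamber}: actual {f['actual']}, point forecast {f['point']}, "
              f"error {error:+.1f} seats, inside {f['low']}-{f['high']}: {inside}")

The misses come out to about 2 seats in the Senate and 2.5 in the House, both comfortably inside the forecast ranges.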

The polls are not magic—they won’t tell you who will win every close race, and they can be off by a lot on occasion, but (a) they’re pretty good on aggregate, and (b) no, they did not predict a Republican tsunami.

In 2016 and 2020, the Republicans did about 2 percentage points better than estimated by the polls. Indeed, it’s a sign of how well the polls did historically that a 2-percentage-point discrepancy was considered large.

The standard narrative

As so often happens, a writer was regurgitating an existing narrative.

Journalists remember the lessons of 2016 and 2020 as “don’t trust the polls,” even though the actual lesson was that the polls were pretty damn good. In the 2016 primaries, the polls were right about Trump having a bit lead over his opponents; it was the pundits who were wrong. In the general election, Trump did something like 2 percentage points better than the polls predicted. 2 percentage points isn’t nothing but it’s pretty small. Back in 2022 the narrative jumped the shark, when people criticized the polls for saying something they didn’t actually say.

The problem’s not just with journalists. Academics too can pick up misconceptions from the news media and then use these factual errors to make ridiculous arguments. For example, after 2020, a University of Chicago economist hyped a “renegade pollster” for its great “accuracy” in 2020, even though that pollster’s forecasts were less accurate than . . . the Economist’s forecast! This foolish professor just took the standard narrative and ran with it. This one was funnier than the example given above, though, because instead of getting it wrong in the London Review of Books, he did it on a website with articles such as “Pork-Stuffed Bill About To Pass Senate Enables Splicing Aborted Babies With Animals” and “Disney’s ‘Cruella’ Tells Girls To Prioritize Vengeance Over Love.” I’m still thinking about how the clever headline writer juxtaposed “pork” with animal-splicing, giving me the indelible image of fetuses that are half-human, half-pig.

As a poll analyst myself, I agree completely that polls have errors and need to be treated carefully, which is why I write posts such as “Don’t kid yourself. The polls messed up . . .” and articles on “Failure and success in political polling and election forecasting.” But imperfect is not the same thing as terrible, and actually they did very well in 2022. If you’re gonna write about the topic at all, get the facts down, please.

P.S. I sent the LRB a letter on this, and they published it. For the magazine, I kept it brief:

Adam Shatz, writing about November’s US midterm elections, remarks that ‘the polls, unreliable as ever (this was one thing Trump got right), told us that high inflation and anxiety about crime were going to provoke a Republican tsunami’ (LRB, 1 December). Actually, the polls in 2022 were accurate and did not predict a ‘red wave’ of any kind. For example, the Economist’s poll-based forecast predicted a possible range for the Republicans of 46 to 55 seats in the Senate and 208-244 in the House of Representatives, with average forecasts of 50.8 and 224.5; in other words, near ties in both Houses of Congress. The Republicans ended up with 49 Senate seats and 222 in the House, well within the forecast ranges.

Journalists seem to have taken the lesson of 2016 and 2020 to be ‘don’t trust the polls,’ despite the evidence. In the 2016 primaries, the polls were right about Trump having a big lead over his opponents; it was the pundits who were wrong. In the general elections of 2016 and 2020, Trump managed something like two percentage points better than the polls predicted, which isn’t nothing, but it’s a sign of how well the polls have done historically that a two-percentage-point discrepancy was considered large.

6 thoughts on “No, the polls did not predict ‘a Republican tsunami’ in 2022.”

  1. The weather report gives us all a daily look at the utility and hazards of predictions. Likewise, there are sports odds and stock market predictions published every day. There are very few articles that report on the errors of the weatherman, the sports book, or the market analysts. I think the difference is that there is a belief that political predictions are not just forecasts but actually influence the outcome. It is believed that if someone reports that Jones will win the local dog catcher spot, this helps Jones’ ambitions. This produces the eagerness to discover and discuss the errors of political predictions. Poor Nate Silver can’t catch a break.

    • Oncodoc,

      I wouldn’t worry about Nate; he seems to be doing ok, and he does great work. He’s a bit too unwilling to question his own methods, at least in public, but I guess that’s the norm among analysts (who don’t want to give away their secret sauce) and journalists (who benefit from an air of authority).

  2. It may be true that the polls did reasonably well in 2022, but I’d be cautious about relying on the Economist’s model to support that claim. Their model can’t possibly be a simple unweighted average of all polls. Most individual House districts never had any publicly released polls at all, and those that did often had just one or two. Polling was more intense on the Senate side, but there were still a lot of races without very much of it.

    There were of course quite a few national polls asking people whether they intended to vote for a generic Democrat or a generic Republican. But translating those polls into a forecast of House or Senate seats is no straightforward matter. It’s not just about the votes; it’s about how they’re distributed (see the toy sketch at the end of this comment). Then there’s the complication of states with no Senate race and House districts where only one party ran a candidate. Whatever exactly the Economist was doing, it must have been partly relying on judgments that may have nudged the forecast in the right direction.

    I’d be interested to see how the national generic polls compared to the overall House popular vote. First, you’d need to correct the actual House vote to account for uncontested districts. There’s more than one way you could do that, but just about any reasonable method would probably work as well as any other. I suspect the corrected actual House vote was at a level that in fact usually would have produced something like 240 Republican seats. Democrats clearly benefited from a more favorable vote distribution than they usually have, which I suspect is because the abortion issue helped them the most in exactly the swing districts where they needed the most help.

    It’s always amusing to remember that the biggest polling miss of my lifetime occurred in 1996. No one remembers it as a big miss, because the polls had Bill Clinton winning easily, and he did win easily. But the difference between a 15-point margin and the actual 8.5-point margin is more substantial than anything we’ve seen lately at the presidential level.
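
    To illustrate the distribution point, here’s a toy sketch in Python. The district numbers are invented purely for illustration and have nothing to do with the Economist’s actual model; the sketch just applies a uniform swing to two hypothetical sets of district-level Democratic vote shares with the same national average, then counts seats.

        # Toy example: the same national two-party vote share can produce
        # different seat totals depending on how district margins are spread.
        # All district shares below are invented for illustration.

        def seats_won(dem_shares, swing):
            """Count districts a Democrat wins after a uniform swing
            (in points of two-party vote share) is added everywhere."""
            return sum(1 for share in dem_shares if share + swing > 50)

        # Two hypothetical 10-district maps, both averaging 50% Democratic:
        packed = [65, 64, 63, 48, 48, 48, 47, 47, 35, 35]  # D votes packed into a few districts
        spread = [53, 52, 52, 51, 51, 49, 49, 48, 48, 47]  # D votes spread more evenly

        for name, shares in [("packed", packed), ("spread", spread)]:
            print(name,
                  "| average D share:", sum(shares) / len(shares),
                  "| seats at no swing:", seats_won(shares, 0),
                  "| seats at a +2 D swing:", seats_won(shares, 2))

    Same 50 percent average in both maps, but one yields 3 of 10 seats and the other 5, and a two-point swing picks up two more seats in one map and none in the other. That’s the sense in which a generic-ballot number alone can’t pin down a seat count without some model of how the votes are distributed.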

  3. Andrew wrote, “in the 2016 primaries, the polls were right about Trump having a bit lead over his opponents; it was the pundits who were wrong.”

    Should “bit” be “big” or should “bit” be “a bit of a”?

    And, should “it was the pundits who were wrong” be “the pundits were the ones who were wrong”?
