Trafalgar polls

There’s been some discussion of the poor performance of the Trafalgar Group polls in 2022. This article by Elliott Morris provides some context, and this post by Lakshya Jain, Michael Lee, Armin Thomas, and Harrison Lavelle discusses some methodological concerns.

Reading about the Trafalgar Group reminds me of a discussion we had a couple years ago regarding an economist who decided to look into election forecasting and wrote an article for a site called thefederalist.com (sharing a page with the amusingly-titled “Disney’s ‘Cruella’ Tells Girls To Prioritize Vengeance Over Love” and the more chilling “Pork-Stuffed Bill About To Pass Senate Enables Splicing Aborted Babies With Animals”), in which he claimed that “renegade pollsters Democracy Institute and Trafalgar . . . can be proud of the accuracy of their much-maligned forecasts of the 2020 election.”

It turned out that Trafalgar didn’t do so great in 2020:

They forecast Biden to win 235 electoral votes. Biden actually won 306. Our Economist model gave a final prediction of 356. 356 isn’t 306. We were off by 50 electoral votes, and that was kind of embarrassing. We discussed what went wrong, and the NYT ran an article on “why political polling missed the mark.” Meanwhile, Trafalgar’s forecast was off by 71 electoral votes.

Being off by 71 electoral votes isn’t horrible—the 2020 election was notoriously difficult to forecast from polling—but it’s not a triumph of accuracy either. So when telling the story of these polls in 2022, we should remember that they didn’t do so wonderfully in 2020 either, even if authors of articles about Cruella and pork-splicing pronounced otherwise.

8 thoughts on “Trafalgar polls”

  1. Just the other day, Andrew wrote about the problems caused by his “crappy” bathroom scale:

    https://statmodeling.stat.columbia.edu/2023/01/06/god-is-in-every-leaf-of-every-tree-bathroom-scale-edition/

    As I understand it, the key point was the zero setting (the bias of the measurements), which led to:

    “From a statistical perspective, the point is that the uncertainty regarding the bias of the measurements has gotta be at least something in the range of the standard deviation of the measurements—not the standard error, which scales like 1/sqrt(n).”

    “the big-data paradox, that the effective uncertainty does not go down like 1/sqrt(n), and if you act as if it does, you’ll make statements that are highly confident and wrong.”

    With regard to polling, do we have the same kind of problem? Is increasing the sample size far less important than getting the “zeroing” right, so that the samples are meaningful in the first place? The “bias of the measurements” here is that most people refuse to pick up the phone or answer the questions, and those who do may not be typical.
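    To make that concrete, here’s a toy simulation (made-up numbers, not a model of any real poll): when each “survey” carries its own bias that doesn’t average away, the total error stops shrinking no matter how big n gets.

      # Toy simulation of the bias-vs-sampling-error point above.
      # Hypothetical numbers: each "survey" draws one bias that does not
      # average away, plus per-measurement noise that does.
      import numpy as np

      rng = np.random.default_rng(0)
      true_value = 50.0
      bias_sd = 2.0   # uncertainty about the bias; does not shrink with n
      noise_sd = 5.0  # per-measurement noise; averages away

      for n in [10, 100, 10000]:
          errors = []
          for _ in range(2000):
              bias = rng.normal(0, bias_sd)  # one bias per survey
              draws = true_value + bias + rng.normal(0, noise_sd, n)
              errors.append(draws.mean() - true_value)
          rmse = np.sqrt(np.mean(np.square(errors)))
          print(f"n={n:6d}: nominal SE={noise_sd / np.sqrt(n):.2f}, actual RMSE={rmse:.2f}")

      # The nominal SE falls like 1/sqrt(n); the actual RMSE flattens out
      # near bias_sd (~2.0) -- the big-data paradox in miniature.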

  2. > They forecast Biden to win 235 electoral votes. Biden actually won 306. Our Economist model gave a final prediction of 356. 356 isn’t 306. We were off by 50 electoral votes, and that was kind of embarrassing. We discussed what went wrong, and the NYT ran an article on “why political polling missed the mark.” Meanwhile, Trafalgar’s forecast was off by 71 electoral votes.

    Is there a better way to measure the comparative magnitude of error than just a flat comparison of electoral-vote discrepancy?

    For example, a small error in a large state could result in being off by more electoral votes than several very large errors in smaller states. In that situation, where was there more error?
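    Just to make the question concrete, one hypothetical alternative (all numbers below invented) would be to weight each state’s margin error by its electoral votes:

      # Hypothetical error metric: average absolute margin error per state,
      # weighted by electoral votes. All numbers below are invented.
      def weighted_margin_error(errors):
          """errors: {state: (absolute margin error in points, electoral votes)}"""
          total_ev = sum(ev for _, ev in errors.values())
          return sum(err * ev for err, ev in errors.values()) / total_ev

      # A modest miss in a big state vs. a huge miss in a tiny state:
      pollster_a = {"TX": (1.0, 38), "VT": (0.5, 3)}
      pollster_b = {"TX": (0.2, 38), "VT": (9.0, 3)}
      print(weighted_margin_error(pollster_a))  # ~0.96
      print(weighted_margin_error(pollster_b))  # ~0.84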

    • Joshua:

      I don’t intend that remark of mine to represent a formal comparison. I think it’s enough to render questionable the Chicago economist’s claim that “renegade pollsters Democracy Institute and Trafalgar . . . can be proud of the accuracy of their much-maligned forecasts of the 2020 election.” You could make the argument that Trafalgar was no worse than some other polls in 2020, but “proud of the accuracy” is a bit rich considering that elsewhere he’s slamming the regular polls for inaccuracy.

      • >You could make the argument that Trafalgar was no worse than some other polls in 2020

        I think this is possible to do. Polling is hard, I get it. The bigger problem is that the pedal-to-the-metal messaging (to a Right — and especially New Right — audience) meant that the comedown in ’22 was always going to be more painful, because Trafalgar’s strategy was not one of cultivating a diverse audience (read: it was higher risk).

        They’ll be Tra-fail-gar in my book until they have a better cycle.

    • We could simply look at which states would need to flip to reach each prediction, and at their vote margins.

      Biden winning North Carolina (Trump +1.3%) and Florida (Trump +3.4%) would have been enough for 350.
      But then the next closest state was Texas (Trump +5.6%), while the Economist’s 356 came from projecting Iowa (Trump +8.2%) for Biden.

      Conversely, Trump would have needed to flip more states to get Biden all the way down toward 235:
      winning Georgia (Biden +0.2%), Arizona (Biden +0.3%), Wisconsin (Biden +0.6%), Pennsylvania (Biden +1.2%), and Nevada (Biden +2.4%) gets Biden to 243, and finally, Trump winning Michigan (Biden +2.8%) would put Biden at 227.

      While I share Andrew’s distrust of Trafalgar, if I had to choose who was more wrong in 2020 (as much as it matters, which may not be a lot), I would go with the Economist.
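      Here’s that ladder of flips as a quick Python sketch, using the actual 2020 margins (rounded as above) and each state’s electoral votes:

        # Actual 2020 margins (percentage points) and electoral votes for the
        # closest states, as listed above.
        trump_states = [("NC", 1.3, 15), ("FL", 3.4, 29), ("TX", 5.6, 38), ("IA", 8.2, 6)]
        biden_states = [("GA", 0.2, 16), ("AZ", 0.3, 11), ("WI", 0.6, 10),
                        ("PA", 1.2, 20), ("NV", 2.4, 6), ("MI", 2.8, 16)]
        biden_actual = 306

        # Toward the Economist's 356: flip Trump states, closest first.
        total = biden_actual
        for state, margin, ev in trump_states:
            total += ev
            print(f"flip {state} (Trump +{margin}%): Biden {total}")

        # Toward Trafalgar's 235: flip Biden states, closest first.
        total = biden_actual
        for state, margin, ev in biden_states:
            total -= ev
            print(f"flip {state} (Biden +{margin}%): Biden {total}")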

      • > Biden winning North Carolina (Trump +1.3%) and Florida (Trump +3.4%) would have been enough for 350.
        > But then the next closest state was Texas (Trump +5.6%), while the Economist’s 356 came from projecting Iowa (Trump +8.2%) for Biden.

        Huh? The Economist projected Iowa goes to Trump. With only a 55% probability, sure, but it’s there.

        I mean, overall the Economist projected a pretty wide spread of results which easily included the actual outcome. Trafalgar was “sure” Trump would win overall. If your central forecast is 235 electoral votes for Biden and you’re sure the number is gonna be below 270, when the real number is 306, that suggests your model is extremely poorly tuned.

        • In fact, if you do some simple maths and make use of the “Biden 97% likely to win”: treating the forecast as roughly normal and solving for the standard error that puts 270 at the 3rd percentile of a distribution centered at 356 gives a standard error of about 46. So being off by 50 electoral votes is totally reasonable – you get the same or bigger divergences with a probability of about 28%.

          If we’re generous and assume that Trafalgar’s “sure Trump win” translates to only a 90% chance of Trump winning, that implies a model standard error of about 27. So the result on the day is 2.6 standard errors out from their estimate, roughly a 1-in-100 event. Their model is garbage.
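          For what it’s worth, here’s that back-of-envelope calculation as a Python sketch (assuming normal forecast distributions; the exact figures shift a little with rounding):

            # Solve P(Biden EVs >= 270) = p for the standard error implied by a
            # normal forecast, then ask how surprising the actual 306 was.
            from scipy.stats import norm

            def implied_sigma(mean, p_win, threshold=270):
                """Standard error implied by a normal forecast with win probability p_win."""
                return abs(threshold - mean) / norm.ppf(p_win)

            actual = 306

            # Economist: mean 356, 97% chance Biden reaches 270.
            sigma_econ = implied_sigma(356, 0.97)                 # ~46
            p_econ = 2 * norm.sf(abs(356 - actual) / sigma_econ)  # ~0.27

            # Trafalgar: mean 235; read "sure Trump wins" as a 90% chance
            # Biden stays below 270.
            sigma_traf = implied_sigma(235, 0.90)                 # ~27
            p_traf = 2 * norm.sf(abs(actual - 235) / sigma_traf)  # ~0.009

            print(f"Economist: sigma={sigma_econ:.1f}, two-sided p of a 50-EV miss: {p_econ:.3f}")
            print(f"Trafalgar: sigma={sigma_traf:.1f}, two-sided p of a 71-EV miss: {p_traf:.3f}")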
