Information, incentives, and goals in election forecasts

Jessica Hullman, Christopher Wlezien, and I write:

Presidential elections can be forecast using information from political and economic conditions, polls, and a statistical model of changes in public opinion over time. We discuss challenges in understanding, communicating, and evaluating election predictions, using as examples the Economist and Fivethirtyeight forecasts of the 2020 election.

Here are the contents of the article:

1. Forecasting presidential elections

1.1. Forecasting elections from political and economic fundamentals

1.2. Pre-election surveys and poll aggregation

1.3. Putting together an electoral college forecast

2. Communicating and diagnosing problems with probabilistic election forecasts

2.1. Win probabilities

2.2. Visualizing uncertainty

2.3. Other ways to communicate uncertainty

2.4. State and national predictions

2.5. Replacement candidates, vote-counting disputes, and other possibilities not included in the forecasting model

3. Calibration and incentives

3.1. The difficulty of calibration

3.2. Incentives for overconfidence

3.3. Incentives for underconfidence

3.4. Comparing different forecasts

3.5. Martingale property

3.6. Novelty and stability

4. Discussion

I like this paper. It gathers various thoughts we’ve had about information underlying election forecasts, how we communicate and understand these predictions, and some of the incentives that lead to different forecasts having different statistical properties.

We thank Joshua Goldstein, Elliott Morris, Merlin Heidemanns, Dhruv Madeka, Yair Ghitza, Doug Rivers, Bob Erikson, Bob Shapiro, and Jon Baron for helpful comments and various government agencies and private foundations for supporting this research.

10 thoughts on “Information, incentives, and goals in election forecasts”

  1. My election night drinking game: take a sip for each null hypothesis p-value analysis of the results available on election night; take two sips for each incorrect explanation/interpretation of the p-value. On an unrelated note, I have to update my will.

  2. Bottom of page 5:

    “If we bump Biden’s predicted 2-party vote down to 53.5%, we get a probability Obama wins of Φ((0.545 − 0.517)/0.02) = 0.816, now just an 80% chance.”

    I think you mean “…probability *Biden* wins…”. Also, it should be “0.535 − 0.517”. (A quick numerical check below confirms this.)

    Great paper, but there are at least a few other typos throughout.
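
    As a quick check of the arithmetic in this correction, here is a minimal Python sketch (the 0.517 threshold and 0.02 scale are taken directly from the quoted formula):

    ```python
    from scipy.stats import norm

    # Win probability in the quoted formula: Phi((mu - 0.517) / 0.02).
    def win_prob(mu, threshold=0.517, scale=0.02):
        return norm.cdf((mu - threshold) / scale)

    print(win_prob(0.545))  # 0.919, not the 0.816 given in the quoted text
    print(win_prob(0.535))  # 0.816, matching the paper, so 0.535 must be meant
    ```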

  3. Reads well, and I think it will be of interest to anyone interested in general statistical issues.

    > we see forecasting as an essentially collaborative exercise.
    Before 2007/8, when talking about meta-analysis, I found it very strange that almost no one seemed to be doing meta-analysis of reported polls.

    At some point (early in 2008), I contacted Sam Wang at Princeton, who seemed to be an exception, and asked him why he thought most were avoiding it. If I recall correctly, he said it was mostly due to a competitive stance, with most arguing that other polls would just ruin their much better one. Here, Wang suggests that combining only became popular in 2008: https://web.math.princeton.edu/~sswang/wang15_IJF_origins-of-poll-aggregation.pdf

    Now, to me meta-analysis never meant taking weighted averages or even obtaining combined estimates, but rather trying to make the most sense one could of multiple endeavors and the inputs they generated.

    > Combining forecasts more formally is an intriguing idea
    So if I am getting this right, this would be trying to make the most sense one could of multiple modelling endeavors and the outputs they generated.

  4. > the Economist forecast does not include a generative model for time changes in the predictors of the fundamentals model—that is, the prediction at time t is based on the fundamentals at time t, not on the forecasts of the values these predictors will be at election day

    Is the model then uncalibrated, so that it underreports uncertainty? Or do you account for that flaw somehow, expect it to be offset by other flaws in the model, or estimate its effect to be negligible?

    > How many decimal places does it make sense to report the win probability?

    Maybe you accidentally a word?

  5. This is a really interesting paper that discusses statistical issues that affect everyone who makes predictions with regression models.

    > We argue that all these sources of information are necessary, and if any are not included, the forecaster is implicitly making assumptions about the missing pieces

    Necessary in which sense, to make good predictions? Isn’t this just a case of P(Y|X) being on average further from the truth than P(Y|X,Z) when Z has predictive value conditional on X? (The first sketch after this comment illustrates this.) I don’t think we really need to assume something about information we don’t use when we condition on some other information. For instance, I think it would be interesting in its own right to study P(Y|X), where X is the fundamentals but not the polls, and see what you get for that. The literature discussed in the paper seems to indicate this already does substantially better than a 50% prediction would. It’s just that you could do far better still by adding polls to the analysis.

    > Given all our uncertainties, it would seem pretty ridiculous to claim we could forecast to that precision anyway, right?
    If I understand this right, the win probability in this example is conditional on a mu and sigma for Biden’s vote share, and it is pointed out that it changes a lot if mu=0.535 instead of mu=0.54. But why can’t you incorporate uncertainty about mu into the win probability by averaging over its posterior distribution? Doesn’t the resulting unconditional win probability adequately reflect whether the conditional-on-mu win probability is very sensitive to mu and we can’t pin mu down very precisely? (The second sketch after this comment illustrates this averaging.)

    >that is, the prediction at time t is based on the fundamentals at time t, not on the forecasts of the values these predictors will be at election day and thus we would not expect these predictions to satisfy the martingale property
    Is this necessary to satisfy the martingale property? You could simply model P(Biden wins | fundamentals_t), which could be allowed to change with t. I thought this is what 538 and the Economist already do with polls: if there were no polling error at all, P(Biden wins | vote_share_t) would depend on how close t is to the election. I think if this model were correct, its stream of win probabilities would be a martingale. (The last sketch after this comment simulates this.)
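
    To illustrate the conditioning point raised above, here is a minimal Python sketch under a toy model (the normal predictors and the specific signal structure are assumptions, not anything from the paper): a forecast conditioning on both X and Z has a lower Brier score on average than one conditioning on X alone.

    ```python
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    n = 200_000

    # Toy model: X ("fundamentals") and Z ("polls") carry independent signal.
    x = rng.normal(size=n)
    z = rng.normal(size=n)
    y = (x + z + rng.normal(size=n) > 0).astype(float)  # binary outcome

    # True conditional probabilities under this toy model:
    p_x  = norm.cdf(x / np.sqrt(2))  # P(Y=1 | X): here Z + noise ~ N(0, 2)
    p_xz = norm.cdf(x + z)           # P(Y=1 | X, Z): only N(0, 1) noise remains

    # Brier scores: the forecast that also conditions on Z is closer to the truth.
    print("Brier, X only: ", np.mean((p_x - y) ** 2))
    print("Brier, X and Z:", np.mean((p_xz - y) ** 2))
    ```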
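    On averaging over the posterior of mu: a Monte Carlo sketch under assumed numbers (the normal posterior centered at 0.54 with sd 0.01 is purely illustrative; the 0.517 threshold and 0.02 scale come from the formula quoted in comment 2):

    ```python
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)

    # Conditional-on-mu win probability, same form as the quoted formula.
    def win_prob(mu, threshold=0.517, scale=0.02):
        return norm.cdf((mu - threshold) / scale)

    # Hypothetical posterior for the expected two-party vote share (illustrative).
    mu_draws = rng.normal(loc=0.54, scale=0.01, size=100_000)

    print("plug-in at mu = 0.54: ", win_prob(0.54))
    print("averaged over posterior:", win_prob(mu_draws).mean())  # pulled toward 0.5
    ```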
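    Finally, a small simulation of the martingale point, assuming a toy model in which the expected vote share follows a random walk until election day (the step size and horizon are made-up numbers). If this model is correct, today's win probability equals the expectation of tomorrow's:

    ```python
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)

    T = 100          # days to the election (illustrative)
    step_sd = 0.002  # daily sd of the vote-share random walk (illustrative)

    def win_prob(mu_t, days_left):
        # P(final share > 0.5) when the share random-walks for days_left days.
        return norm.cdf((mu_t - 0.5) / (step_sd * np.sqrt(days_left)))

    mu_0 = 0.52
    p_today = win_prob(mu_0, T)

    # Simulate one day forward many times; average tomorrow's win probability.
    mu_1 = mu_0 + rng.normal(0, step_sd, size=1_000_000)
    p_tomorrow = win_prob(mu_1, T - 1).mean()

    print(p_today, p_tomorrow)  # agree up to Monte Carlo error: a martingale
    ```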

  6. Andrew, Jessica, and Christopher: Congratulations on the wonderful paper. I just wanted to add, for blog readers’ benefit, what was especially impressive to me, which was seeing, from a bit behind the scenes, the rapidity and creativity with which this paper took shape. My first exposure was a few weeks ago, with Andrew by email. He suggested a paper, found collaborators, offered co-authorship to everyone in the discussion, and then wrote it all up. There are many ideas here, ranging from comments on contemporary journalism and modeling to stylized but very effective models that give important insights. Bravo!
