I agreed to give a talk in December for Jared, and this is what I came up with:

**Election Forecasting: How We Succeeded Brilliantly, Failed Miserably, or Landed Somewhere in Between**

Andrew Gelman, Department of Statistics and Department of Political Science, Columbia University

Several months before the election we worked with The Economist magazine to […]

**Decision Theory** category.

## Piranhas in the rain: Why instrumental variables are not as clean as you might have thought

Woke up in my clothes again this morning
I don’t know exactly where I am
And I should heed my doctor’s warning
He does the best with me he can
He claims I suffer from delusion
But I’m so confident I’m sane
It can’t be a statistical illusion
So how can you explain
Piranhas in […]

## Reference for the claim that you need 16 times as much data to estimate interactions as to estimate main effects

Ian Shrier writes: I read your post on the power of interactions a long time ago and couldn’t remember where I saw it. I just came across it again by chance. Have you ever published this in a journal? The concept comes up often enough and some readers who don’t have methodology expertise feel more […]
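One way to see where the factor of 16 can come from (a sketch of the standard argument, not taken from the post itself): in a balanced 2×2 design with predictors coded ±1/2, the interaction coefficient has twice the standard error of a main effect, so matching its precision takes 4× the data; if the interaction is also assumed to be half the size of the main effect, that costs another factor of 4.

```python
import numpy as np

# Balanced 2x2 design with +/-1/2 coding (an assumption of this sketch):
# each predictor is -0.5 or +0.5, all four cells equally represented.
x1 = np.array([-0.5, -0.5, 0.5, 0.5])
x2 = np.array([-0.5, 0.5, -0.5, 0.5])
X = np.column_stack([np.ones(4), x1, x2, x1 * x2])  # intercept, mains, interaction
X = np.tile(X, (100, 1))                            # n = 400 observations

# With residual sd sigma, se(beta_j) = sigma * sqrt([(X'X)^-1]_jj);
# sigma cancels in the ratio, so set it to 1.
cov = np.linalg.inv(X.T @ X)
se = np.sqrt(np.diag(cov))

ratio = se[3] / se[1]  # interaction se relative to a main-effect se
print(ratio)           # 2.0: twice the se, hence 4x the data for equal precision
```

Doubling the standard error means quadrupling the sample size to compensate; halving the assumed effect size quadruples it again, giving 16.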

## Calibration problem in tails of our election forecast

Following up on the last paragraph of this discussion, Elliott looked at the calibration of our state-level election forecasts, fitting our model retroactively to data from the 2008, 2012, and 2016 presidential elections. The plot above shows the point prediction and election outcome for the 50 states in each election, showing in red the states […]
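The check described above amounts to asking how often outcomes land inside the stated predictive intervals. A minimal sketch of that kind of coverage check, using simulated stand-in numbers rather than the actual 2008–2016 forecasts:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the real forecasts: point predictions with a
# common forecast sd, and outcomes simulated from the model itself
# (NOT the actual state-level data).
n_states = 150  # 50 states x 3 elections
sd = 0.03
pred = rng.uniform(0.35, 0.65, n_states)
outcome = pred + rng.normal(0, sd, n_states)

# Empirical coverage of the central 95% predictive interval
z = (outcome - pred) / sd
covered = np.abs(z) <= 1.96
print(f"95% interval coverage: {covered.mean():.2f}")
# Tail miscalibration would show up as coverage well below 0.95
# when the same check is run on real outcomes.
```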

## Uri Simonsohn’s Small Telescopes

I just happened to come across this paper from 2015 that makes an important point very clearly: It is generally very difficult to prove that something does not exist; it is considerably easier to show that a tool is inadequate for studying that something. With a small-telescopes approach, instead of arriving at the conclusion that […]
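The small-telescopes benchmark compares a replication estimate against d33, the effect size the original study had 33% power to detect. A sketch of that quantity for a two-sample comparison under a normal approximation (the paper's own computations are exact; this is a simplification):

```python
from statistics import NormalDist

def d33(n_per_group, alpha=0.05, power=1/3):
    """Effect size (Cohen's d) that a two-sample z-test with n_per_group
    per cell would detect with the given power. Normal approximation:
    power = Phi(d * sqrt(n/2) - z_alpha), solved for d."""
    N = NormalDist()
    z_alpha = N.inv_cdf(1 - alpha / 2)   # 1.96 for alpha = .05
    z_power = N.inv_cdf(power)           # about -0.43 for power = 1/3
    return (z_alpha + z_power) / (n_per_group / 2) ** 0.5

print(round(d33(20), 2))  # roughly 0.5 for n = 20 per cell
```

The logic: a study with 20 per cell is a "small telescope" for effects below about d = 0.5, so a replication can reject that the effect is large enough for the original design to have detected it, without having to prove the effect is exactly zero.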

## Information, incentives, and goals in election forecasts

Jessica Hullman, Christopher Wlezien, Elliott Morris, and I write: Presidential elections can be forecast using information from political and economic conditions, polls, and a statistical model of changes in public opinion over time. However, these “knowns” about how to make a good presidential election forecast come with many unknowns due to the challenges of […]

## We want certainty even when it’s not appropriate

Remember the stents example? An experiment was conducted comparing two medical procedures, the difference had a p-value of 0.20 (after a corrected analysis the p-value was 0.09) and so it was declared that the treatment had no effect. In other cases, of course, “p less than 0.10” is enough for publication in PNAS and multiple […]

## Election forecasts: The math, the goals, and the incentives (my talk this Friday afternoon at Cornell University)

At the Colloquium for the Center for Applied Mathematics, Fri 18 Sep 3:30pm: Election forecasts: The math, the goals, and the incentives Election forecasting has increased in popularity and sophistication over the past few decades and has moved from being a hobby of some political scientists and economists to a major effort in the news […]

## They want “statistical proof”—whatever that is!

Bert Gunter writes: I leave it to you to decide whether this is fodder for your blog: So when a plaintiff using a hiring platform encounters a problematic design feature — like platforms that check for gaps in employment — she should be able to bring a lawsuit on the basis of discrimination per se, […]

## Information, incentives, and goals in election forecasts

Jessica Hullman, Christopher Wlezien, and I write: Presidential elections can be forecast using information from political and economic conditions, polls, and a statistical model of changes in public opinion over time. We discuss challenges in understanding, communicating, and evaluating election predictions, using as examples the Economist and Fivethirtyeight forecasts of the 2020 election. Here are […]

## Automatic data reweighting!

John Cook writes: Suppose you are designing an autonomous system that will gather data and adapt its behavior to that data. At first you face the so-called cold-start problem. You don’t have any data when you first turn the system on, and yet the system needs to do something before it has accumulated data. So […]
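One standard way to get this behavior (an illustration, not necessarily Cook's own construction) is conjugate Bayesian updating: start from a prior guess, and the weight shifts from prior to data automatically as observations accumulate, with no hand-tuned schedule.

```python
def posterior_mean(prior_mean, prior_var, obs, obs_var):
    """Normal-normal conjugate update: the posterior mean is a
    precision-weighted average of the prior mean and the sample mean,
    so the system leans on the prior at cold start and on the data
    once it has accumulated."""
    n = len(obs)
    if n == 0:
        return prior_mean            # cold start: no data, use the prior
    sample_mean = sum(obs) / n
    w_prior = 1 / prior_var          # precision of the prior (fixed)
    w_data = n / obs_var             # precision of the data (grows with n)
    return (w_prior * prior_mean + w_data * sample_mean) / (w_prior + w_data)

print(posterior_mean(0.0, 1.0, [], 1.0))          # 0.0  (prior only)
print(posterior_mean(0.0, 1.0, [2.0], 1.0))       # 1.0  (halfway)
print(posterior_mean(0.0, 1.0, [2.0] * 99, 1.0))  # 1.98 (data dominates)
```

The reweighting is "automatic" in that the same formula covers zero observations, one observation, and a million of them.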

## More on that Fivethirtyeight prediction that Biden might only get 42% of the vote in Florida

I’ve been chewing more on the above Florida forecast from Fivethirtyeight. Their 95% interval for the election-day vote margin in Florida is something like [+16% Trump, +20% Biden], which corresponds to an approximate 95% interval of [42%, 60%] for Biden’s share of the two-party vote. This is bugging me because it’s really hard for me […]
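For reference, the margin-to-share conversion behind those numbers (a sketch, assuming a strict two-party split): if m is Biden's margin in two-party percentage points, Biden's share is (100 + m)/2.

```python
def margin_to_share(margin_pts):
    """Convert a two-party margin (Biden minus Trump, in percentage
    points) to Biden's share of the two-party vote, in percent."""
    return (100 + margin_pts) / 2

# The interval quoted above: [Trump +16, Biden +20]
lo = margin_to_share(-16)
hi = margin_to_share(+20)
print(lo, hi)  # 42.0 60.0
```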

## Florida. Comparing Economist and Fivethirtyeight forecasts.

Here’s our current forecast for Florida: We’re forecasting 52.6% of the two-party vote for Biden, with a 95% predictive interval of approx [47.0%, 58.2%], thus an approx standard error of 2.8 percentage points. The 50% interval from the normal distribution is mean +/- 2/3 s.e., thus approx [50.7%, 54.5%]. Yes, I know these predictive distributions […]
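The arithmetic in that paragraph, spelled out under a normal approximation using the standard-library `NormalDist`:

```python
from statistics import NormalDist

N = NormalDist()
mean = 52.6
lo95, hi95 = 47.0, 58.2

# Back out the standard error from the half-width of the 95% interval
se = (hi95 - lo95) / 2 / N.inv_cdf(0.975)  # about 2.86 points

# 50% interval: mean +/- 0.674 * se, i.e. roughly mean +/- (2/3) * se
z50 = N.inv_cdf(0.75)                      # about 0.674
lo50, hi50 = mean - z50 * se, mean + z50 * se
print(round(lo50, 1), round(hi50, 1))      # 50.7 54.5
```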

## 53 fever!

One thing statisticians love is a story about people acting foolish around probability . . . Paul Alper points us to this news article by David Robson: Fifteen years ago, the people of Italy experienced a strange kind of mass hysteria known as “53 fever”. The madness centred on the country’s lottery. Players can choose […]
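The probability behind the "fever" is easy to sketch. Under assumptions not taken from the article (each draw picks 5 of 90 numbers, draws are independent, three draws per week), a two-year absence for one particular number is very unlikely, yet across 90 numbers and many wheels some long streak is to be expected, and none of it changes the odds for the next draw:

```python
# Assumptions of this sketch: 5 of 90 numbers per draw, independent
# draws, 3 draws per week (not sourced from the article).
p_miss_one = 85 / 90            # a given number fails to appear in one draw

two_years_of_draws = 3 * 52 * 2
p_long_absence = p_miss_one ** two_years_of_draws
print(f"{p_long_absence:.2e}")  # tiny for any ONE prespecified number...

# ...but with 90 numbers and multiple wheels, rare streaks somewhere are
# expected; either way, the chance 53 appears in the NEXT draw is unchanged.
p_next = 5 / 90
print(round(p_next, 3))         # 0.056
```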

## Decision-making under uncertainty: heuristics vs models

This post is by Phil Price, not Andrew. Sometimes it’s worth creating a complicated statistical model that can help you make a decision; other times it isn’t. As computer power has improved and modeling capabilities have increased, more and more decisions shift into the category in which it’s worth making a complicated model, but often […]

## Heckman Curve Update Update

tl;dr: “The policy conclusion we draw from our analysis is that age is not a short cut for identifying where governments should, or should not, invest. There are many well-studied interventions for children that are worthy candidates for public funding based on efficiency considerations. However, the same is also true of many interventions targeting youth […]

## Probabilistic forecasts cause general misunderstanding. What to do about this?

The above image, taken from a site at the University of Virginia, illustrates a problem with political punditry: There’s a demand for predictions, and there’s no shortage of outlets promising a “crystal ball” or some other sort of certainty. Along these lines, Elliott Morris points us to this very reasonable post, “Poll-Based Election Forecasts Will […]

## Getting all negative about so-called average power

Blake McShane writes: The idea of retrospectively estimating the average power of a set of studies via meta-analysis has recently been gaining a ton of traction in psychology and medicine. This seems really bad for two reasons: 1. Proponents claim average power is a “replicability estimate” and that it estimates the rate of replicability “if […]

## On deck through Jan 2021

This should keep you busy through the end of the year . . . Many of these posts were originally scheduled earlier but then got bumped because of coronavirus and other topical material.

- The typical set and its relevance to Bayesian computation
- Getting negative about the critical positivity ratio: when you talk about throwing out […]

## Thinking about election forecast uncertainty

Some twitter action Elliott Morris, my collaborator (with Merlin Heidemanns) on the Economist election forecast, pointed me to some thoughtful criticisms of our model from Nate Silver. There’s some discussion on twitter, but in general I don’t find twitter to be a good place for careful discussion, so I’m continuing the conversation here. Nate writes: […]