Chris Arderne writes: Surprised to see you hadn’t yet discussed the Oxford/AstraZeneca 60%/90% story on the blog. They accidentally changed the dose for some patients without a hypothesis, saw that it worked out better, and are now (sort of) claiming 90% as a result… Sounds like your kind of investigation? I hadn’t heard about this […]


## Is vs. ought in the study of public opinion: Coronavirus “opening up” edition

I came across this argument between two of my former co-bloggers which illustrates a general difficulty in thinking about political attitudes, namely the confusion between two things: (a) public opinion, and (b) what we want public opinion to be. This is something I’ve been thinking about for many years, ever since our Red State Blue […]

## Can we stop talking about how we’re better off without election forecasting?

This is a public service post of sorts, meant to collect some reasons why getting rid of election forecasts is a non-starter in one place. First to set context: what are the reasons people argue we should give them up? This is far from an exhaustive list (and some of these reasons overlap) but a […]

## Bayesian Workflow

Aki Vehtari, Daniel Simpson, Charles C. Margossian, Bob Carpenter, Yuling Yao, Paul-Christian Bürkner, Lauren Kennedy, Jonah Gabry, Martin Modrák, and I write: The Bayesian approach to data analysis provides a powerful way to handle uncertainty in all observations, model parameters, and model structure using probability theory. Probabilistic programming languages make it easier to specify and […]

## What happens to the median voter when the electoral median is at 52/48 rather than 50/50?

Here’s a political science research project for you. Joe Biden got about 52 or 53% of the two-party vote, which was enough for him to get a pretty close win in the electoral college. As we’ve discussed, 52-48 is a close win by historical or international standards but a reasonably big win in the context […]

## What would it mean to really take seriously the idea that our forecast probabilities were too far from 50%?

Here’s something I’ve been chewing on that I’m still working through. Suppose our forecast in a certain state is that candidate X will win 0.52 of the two-party vote, with a forecast standard deviation of 0.02. Suppose also that the forecast has a normal distribution. (We’ve talked about the possible advantages of long-tailed forecasts, but […]
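The setup in the excerpt can be worked through directly. A minimal sketch using only the stated numbers (forecast mean 0.52, standard deviation 0.02, normal distribution; the 0.5 threshold as the two-party tipping point is my assumption of the intended win condition):

```python
from math import erf, sqrt

def normal_cdf(x, mean, sd):
    """Normal CDF computed via the error function (stdlib only)."""
    return 0.5 * (1 + erf((x - mean) / (sd * sqrt(2))))

# Forecast: candidate X's two-party vote share ~ Normal(0.52, 0.02).
mean, sd = 0.52, 0.02

# Probability that X wins the state = P(vote share > 0.5).
# 0.5 is exactly one sd below the mean, so this is Phi(1).
p_win = 1 - normal_cdf(0.5, mean, sd)
print(f"P(win) = {p_win:.3f}")  # ~0.841
```

So a 52/48 point forecast with a 2-point sd translates to roughly a 5-in-6 win probability, which is the kind of number the post is asking whether to trust.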

## Don’t kid yourself. The polls messed up—and that would be the case even if we’d forecasted Biden losing Florida and only barely winning the electoral college

To continue our post-voting, pre-vote-counting assessment (see also here and here), I want to separate two issues which can get conflated:

## Why it can be rational to vote

I think I can best do my civic duty by running this one every Election Day, just like Art Buchwald on Thanksgiving. . . .

## Sh*ttin brix in the tail…

After my conversation with Andrew yesterday about The Economist election forecasting model I got curious about how G. Elliott, Merlin, and Andrew want their prediction to be assessed given the menu of strange contingencies we have in front of us. I checked Betfair rules for some guidance: This market will be settled according to the candidate […]

## So, what’s with that claim that Biden has a 96% chance of winning? (some thoughts with Josh Miller)

As indicated above, our model gives Joe Biden a 99+% chance of receiving more votes than Donald Trump and a 96% chance of winning in the electoral college. Michael Wiebe wrote in to ask: Your Economist model currently says that Biden has a 96% chance of winning the electoral college. How should we think about […]

## Concerns with our Economist election forecast

A few days ago we discussed some concerns with Fivethirtyeight’s election forecast. This got us thinking again about some concerns with our own forecast for The Economist (see here for more details). Here are some of our concerns with our forecast: 1. Distribution of the tails of the national vote forecast 2. Uncertainties of state […]

## Prediction markets and election forecasts

Zev Berger writes: The question sounds snarky, but it’s not meant in that vein. It’s instructive to hear how modelers understand the predictions of their models, which is something I am still trying to think through. Your model has the chance of Biden being elected at 0.95. Predictit has Biden at 0.60. Given the spread, […]
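The spread in that excerpt has a simple expected-value reading, which may be what makes it feel puzzling. A sketch under simplifying assumptions (Predictit-style shares pay $1 if the candidate wins; fees, limits, and counterparty risk ignored; numbers taken from the excerpt):

```python
def expected_profit(model_prob, market_price):
    """Expected profit per share paying $1 on a win, bought at market_price,
    if the model's probability is taken at face value."""
    return model_prob * 1.0 - market_price

# Model: P(Biden wins) = 0.95; market: $0.60 per share.
ev = expected_profit(0.95, 0.60)
print(f"Expected profit per $0.60 share: ${ev:.2f}")  # $0.35
```

A 35-cent expected profit on a 60-cent stake is an enormous edge, which is exactly why such a gap invites the question of how literally the model's 0.95 should be taken.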

## “Valid t-ratio Inference for instrumental variables”

A couple people pointed me to this recent econometrics paper, which begins: In the single IV model, current practice relies on the first-stage F exceeding some threshold (e.g., 10) as a criterion for trusting t-ratio inferences, even though this yields an anti-conservative test. We show that a true 5 percent test instead requires an […]

## Body language and machine learning

Riding on the street, I can usually tell what cars in front of me are going to do, based on their “body language”: how they are positioning themselves in their lane. I don’t know that I could quite articulate what the rules are, but I can tell what’s going on, and I know that I […]

## Reverse-engineering the problematic tail behavior of the Fivethirtyeight presidential election forecast

We’ve been writing a bit about some odd tail behavior in the Fivethirtyeight election forecast, for example that it was giving Joe Biden a 3% chance of winning Alabama (which seemed high), it was displaying Trump winning California as in “the range of scenarios our model thinks is possible” (which didn’t seem right), and it […]
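The underlying issue is that probabilities several standard deviations out depend heavily on the assumed distributional shape. A sketch with hypothetical numbers (not either model's actual output): a deep-red state where the Democrat's two-party share is centered around 0.38 with sd 0.035, compared under a normal and a fat-tailed Student-t(4) forecast:

```python
from scipy import stats

# Hypothetical state forecast: Democratic two-party share centered at 0.38
# with sd 0.035 -- illustrative numbers only.
mean, scale = 0.38, 0.035
z = (0.5 - mean) / scale  # about 3.4 sd's from the tipping point

p_normal = stats.norm.sf(z)    # thin-tailed: upset chance essentially zero
p_t4 = stats.t.sf(z, df=4)     # fat-tailed: upset chance on the order of 1%
print(f"normal: {p_normal:.5f}, t(4): {p_t4:.5f}")
```

The fat-tailed version gives an upset probability tens of times larger from the same center and scale, which is how a model can end up assigning percent-level chances to outcomes like Biden winning Alabama or Trump winning California.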

## “Election Forecasting: How We Succeeded Brilliantly, Failed Miserably, or Landed Somewhere in Between”

I agreed to give a talk in December for Jared, and this is what I came up with: Election Forecasting: How We Succeeded Brilliantly, Failed Miserably, or Landed Somewhere in Between Andrew Gelman, Department of Statistics and Department of Political Science, Columbia University Several months before the election we worked with The Economist magazine to […]

## Piranhas in the rain: Why instrumental variables are not as clean as you might have thought

Woke up in my clothes again this morning I don’t know exactly where I am And I should heed my doctor’s warning He does the best with me he can He claims I suffer from delusion But I’m so confident I’m sane It can’t be a statistical illusion So how can you explain Piranhas in […]

## Reference for the claim that you need 16 times as much data to estimate interactions as to estimate main effects

Ian Shrier writes: I read your post on the power of interactions a long time ago and couldn’t remember where I saw it. I just came across it again by chance. Have you ever published this in a journal? The concept comes up often enough and some readers who don’t have methodology expertise feel more […]
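The "16 times" arithmetic can be reproduced analytically for a balanced two-arm comparison of means; a sketch under the usual assumptions (equal cell sizes, common residual sd sigma, and an interaction effect half the size of the main effect):

```python
from math import sqrt

def se_main(n, sigma=1.0):
    """SE of a main effect: difference of two group means, n/2 units each."""
    return sqrt(sigma**2 / (n / 2) + sigma**2 / (n / 2))  # = 2*sigma/sqrt(n)

def se_interaction(n, sigma=1.0):
    """SE of an interaction: difference of differences over four cells, n/4 each."""
    return sqrt(4 * sigma**2 / (n / 4))                   # = 4*sigma/sqrt(n)

n = 1000
ratio_se = se_interaction(n) / se_main(n)  # 2.0: interaction SE is twice as large
# If the interaction is also half the size of the main effect, its z-ratio is
# 4x smaller, so matching power requires 4**2 = 16x the sample size.
factor = (ratio_se / 0.5) ** 2
print(ratio_se, factor)  # 2.0 16.0
```

The factor of 2 in the standard error and the factor of 2 in the assumed effect size multiply inside the square, giving the 16.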

## Calibration problem in tails of our election forecast

Following up on the last paragraph of this discussion, Elliott looked at the calibration of our state-level election forecasts, fitting our model retroactively to data from the 2008, 2012, and 2016 presidential elections. The plot above shows the point prediction and election outcome for the 50 states in each election, showing in red the states […]

## Uri Simonsohn’s Small Telescopes

I just happened to come across this paper from 2015 that makes an important point very clearly: It is generally very difficult to prove that something does not exist; it is considerably easier to show that a tool is inadequate for studying that something. With a small-telescopes approach, instead of arriving at the conclusion that […]