Nooooooooooooooooooo!

Michael Axelrod writes:

Quantitative historian Allan Lichtman claims to have discovered 13 predictors that determine who will win the popular vote in presidential elections. He predicts Obama will win in 2012. Writing in his New York Times column, “538,” Nate Silver attempted a critique of Lichtman’s prediction. Soon afterward, Lichtman wrote a rejoinder. Evidently Lichtman has correctly and publicly predicted the popular vote winner in the last 7 presidential elections. I think he predicted Gore would win in 2000; he got the popular vote winner right, but not the electoral college winner. Lichtman presents his methods in his early-1980s book, “The Keys to the White House.” Lichtman consulted with Volodia Keilis-Borok, and used a kernel discriminant analysis approach on election results from 1860-1980 as the training set. I think there is some argument as to scoring, because Lichtman claims more than 7 successes. I guess he divided the data into training and validation sets and wants credit for the validation. Did he do what Edward Leamer calls a “specification search,” with all its pitfalls? I don’t know without more reading.

Let’s give him credit for 7 out of 7. How good is this? I don’t think it’s very good, based on your 1993 paper on why presidential polls are so variable when the vote is so predictable from political science variables. If we can generally predict the popular vote to within a few percent a year ahead of the election, we don’t need those 13 variables he teased out of the data. Nevertheless, I think the proper method for scoring predictions is of interest. It’s pretty easy to predict rain or no rain in the desert: just say “no rain” and you will get a high and perhaps perfect score. What we would like to know is how much better Lichtman does than a naive oracle, and that naive oracle can already be pretty good: incumbents have won 70% of the time in presidential elections since 1860. In other words, how much does that 7 out of 7 (or, say, n out of m where n is very close to m) tell us about the added information? What does it tell us about the probability that the next prediction will be correct?

I wonder if you have given this problem any thought?

My reply:

See here. Also here.
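
To put rough numbers on Axelrod’s scoring question, here is a minimal sketch in Python. It is not anything Lichtman or Axelrod computed; it simply takes the 70% incumbent base rate quoted above and assumes (my assumptions, not the post’s) independent elections and a uniform prior on Lichtman’s true hit rate:

```python
# Sketch of Axelrod's scoring question: how informative is 7-for-7
# relative to a naive "always pick the incumbent party" rule?
# Assumptions (made here, not from the post): independent elections
# and a uniform Beta(1, 1) prior on Lichtman's true hit rate.

BASE_RATE = 0.70      # incumbents win about 70% of the time (quoted above)
HITS = TRIALS = 7     # Lichtman's record: 7 correct calls out of 7

# Chance the naive incumbent rule also goes 7 for 7, from the base rate alone.
p_naive_perfect = BASE_RATE ** TRIALS
print(f"P(naive rule goes 7/7): {p_naive_perfect:.3f}")        # ~0.082

# Laplace's rule of succession: after n hits in n tries under a uniform
# prior, the posterior predictive probability of another hit is (n+1)/(n+2).
p_next = (HITS + 1) / (TRIALS + 2)
print(f"P(next call correct | 7/7): {p_next:.3f}")             # ~0.889

# The posterior on the hit rate is Beta(8, 1), whose CDF at x is x**8,
# so the posterior probability that the hit rate beats the base rate is:
p_beats_base = 1 - BASE_RATE ** (HITS + 1)
print(f"P(hit rate > base rate | 7/7): {p_beats_base:.3f}")    # ~0.942
```

The point of the sketch: a perfect 7-for-7 is not wildly improbable for the naive incumbent rule (about an 8% chance), so seven hits by themselves are only modest evidence of skill beyond the base rate.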

4 thoughts on “Nooooooooooooooooooo!”

  1. 3. Rosenstone finds that candidates do benefit slightly by being political moderates, but it’s only a couple of percentage points, so not a huge effect.

    Even a couple of percentage points would have determined the 2000 and 2004 elections.

  2. Allan Lichtman may have it right again this time too. Unfortunately for the right, they seem stuck on power, not at all on what’s best for the country. If we boil it down to their understanding of the economy, they’ve got it all wrong. They think lower taxes are the answer to everything in the economy, especially jobs. Simply put, the Obama administration has kept the Bush tax cuts from 2001 in place, and the job-creating machine that the GOP claims results from lower taxes has not materialized. Really? We already have lower taxes in place for the higher tax brackets… where are the jobs?

    The GOP is lost. They don’t know what their message is or who’s best suited to be their messenger. So they’re wildly swinging for the fences. They’re more caught up in “Hope” than the Obama administration.

  3. Looking not at Lichtman, but at Axelrod’s question about judging prediction: in meteorology, the figure of merit for forecasting is called “skill”: how much better does your fancy method do than a naive forecast? The naive forecast might be “the weather 72 hours from now will be the same as it is today” (persistence), or “the probability of rain 72 hours from now is the probability of rain on that calendar day in an average year” (climatology), or something like that.

    The concept of forecast skill is at the heart of Axelrod’s question (a sketch of one standard skill measure follows below). I know this term only in the context of meteorology and climatology. What jargon do statisticians more generally use for this concept?
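
On that last question: the forecast-verification literature, and statisticians who work in it, generally speak of “skill scores”: the improvement of a proper scoring rule (such as the Brier score) over a reference forecast like climatology. A minimal sketch, with probabilities and outcomes invented purely for illustration:

```python
# A minimal sketch of forecast "skill": the Brier skill score,
# 1 - BS_forecast / BS_reference, where the reference is a naive
# baseline such as climatology. All numbers below are invented
# purely for illustration.

def brier_score(probs, outcomes):
    """Mean squared error of probability forecasts against 0/1 outcomes."""
    return sum((p - y) ** 2 for p, y in zip(probs, outcomes)) / len(outcomes)

outcomes = [1, 0, 0, 1, 1, 0, 1, 1]                  # 1 = rain, 0 = no rain
fancy = [0.9, 0.2, 0.1, 0.7, 0.8, 0.3, 0.6, 0.9]     # the "fancy method"
climatology = [0.6] * len(outcomes)                  # base rate, every day

bs_fancy = brier_score(fancy, outcomes)
bs_climatology = brier_score(climatology, outcomes)
skill = 1 - bs_fancy / bs_climatology                # > 0 beats the baseline
print(f"Brier scores: fancy={bs_fancy:.3f}, climatology={bs_climatology:.3f}")
print(f"Brier skill score vs. climatology: {skill:.3f}")
```

A skill score of 1 is a perfect forecast, 0 is no improvement over the reference, and negative values mean the fancy method does worse than the naive baseline.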
