Vivek Mohta asks (in a comment here) the following:
The conclusion [of some research on election forecasting] seems to be that presidential election results turn primarily on the *performance* of the party controlling the White House. The political views of and campaigning by the challenging candidate (within historical norms) have little to no impact on results.
The most recent paper applying this method is *The Keys to the White House: Forecast for 2008*. I haven’t yet looked at the original paper from 1982 where the method is developed. But there was a reference to his work in Operations Research Today: “His method is based on a statistical pattern recognition algorithm for predicting earthquakes, implemented by Russian seismologist Volodia Keilis-Borok. In English-language terminology, the technique most closely resembles kernel discriminant function analysis.”
1. The bit about the seismologist is perhaps of historical interest but not so relevant for our understanding. It’s ok to just think about this in terms of linear models and regression.
2. My favorite single thing written on election forecasting is Steven Rosenstone’s 1984 book, Forecasting Presidential Elections. He (along with later researchers such as Campbell and Erikson) argues, with data to support it, that the national election outcome is largely predictable from the recent performance of the economy, with state-to-state variation being mostly consistent from election to election after controlling for home-state and region effects.
3. Rosenstone finds that candidates do benefit slightly by being political moderates–but it’s only a couple of percentage points, so not a huge effect.
4. Campaigns do have effects. However, presidential elections tend to be closely contested in terms of resources, and so the two sides’ campaigns pretty much cancel each other out.
5. The Lichtman stuff is ok in the sense of generally getting things right without having to be quantitative–but it has one thing that really bugs me, which is the attempt to predict the winner of every election. In the past 50 years, there have been 4 elections that have been essentially tied in the final vote: 1960, 1968, 1976, and 2000. (You could throw 2004 in there too.) It’s meaningless to say that a forecasting method predicts the winner correctly (or incorrectly) in these cases. And from a statistical point of view, you don’t want to adapt your model to fit these tossups–it’s just an invitation to overfitting.
To put it another way: suppose his method had mispredicted 1960, 1968, and 1976. Would I think any less of the method? No. A method that predicts vote share (such as those used by political scientists) could get credit from these close elections by predicting the vote share with high accuracy. Again, I see virtue in the simplicity of Lichtman’s method, but let’s be careful in how we evaluate it.
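To make the point concrete, here is a minimal simulation (my own illustration, not from Lichtman or Rosenstone): take a hypothetical forecaster whose vote-share predictions are off by about two percentage points on average, and see how often it calls the winner correctly as a function of the true margin. In a race decided by a fraction of a point, the winner call is close to a coin flip, so a correct (or incorrect) call in such a year carries almost no information about the method’s skill.

```python
import random

random.seed(42)

def winner_accuracy(true_share, sigma, n=50_000):
    """Fraction of simulated forecasts (true two-party share plus
    Gaussian error with sd sigma) that call the winner correctly."""
    hits = 0
    for _ in range(n):
        forecast = random.gauss(true_share, sigma)
        hits += (forecast > 0.5) == (true_share > 0.5)
    return hits / n

# Forecaster with ~2-point vote-share errors (sigma = 0.02), at
# various true two-party margins (margin in points = 200 * (share - 0.5)).
for margin_pts in (0.1, 1.0, 5.0):
    share = 0.5 + margin_pts / 200
    print(f"margin {margin_pts:>4.1f} pts -> winner-call accuracy "
          f"{winner_accuracy(share, 0.02):.2f}")
```

The accuracy climbs from roughly a coin flip at a 0.1-point margin toward near-certainty at a 5-point margin, which is exactly why tossup years are uninformative tests of a winner-only method.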
(I made the above point here (see the last full paragraph on page 120) in my 1993 review of Lewis-Beck and Rice’s book on forecasting elections.)
6. If your goal really is forecasting, and you have the technical sophistication of an operations researcher, you should definitely be forecasting vote share (at the national level, or even better, by state) rather than just the winner. Lots of information gets lost when you convert a continuous outcome into a binary one.
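To see the information loss, here is a small sketch with made-up numbers (the vote shares and both forecasters are hypothetical, chosen only for illustration): two forecasters call every winner identically, so a winner-only scorecard cannot distinguish them, while a continuous measure such as root-mean-square error on vote share separates them immediately.

```python
import math

# Hypothetical incumbent-party two-party vote shares for five elections,
# with two hypothetical forecasters. Both call the same winners.
actual     = [0.538, 0.488, 0.512, 0.461, 0.551]
forecast_a = [0.540, 0.490, 0.510, 0.465, 0.548]   # sharp: sub-point errors
forecast_b = [0.580, 0.440, 0.560, 0.410, 0.600]   # vague: ~5-point errors

def winner_hits(pred, truth):
    """Number of elections where the forecast calls the winner correctly."""
    return sum((p > 0.5) == (t > 0.5) for p, t in zip(pred, truth))

def rmse(pred, truth):
    """Root-mean-square error of the vote-share forecasts."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, truth)) / len(truth))

print("winner calls:", winner_hits(forecast_a, actual),
      "vs", winner_hits(forecast_b, actual))
print(f"vote-share RMSE: {rmse(forecast_a, actual):.3f} "
      f"vs {rmse(forecast_b, actual):.3f}")
```

On the binary scale the two forecasters are indistinguishable (five correct calls each); on vote share, forecaster A is more than an order of magnitude more accurate.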