Anthony Goldbloom from Kaggle writes:
We’ve recently put up some interesting new competitions. Last week, Jeff Sonas, the creator of the Chessmetrics rating system, launched a competition to find a chess rating algorithm that performs better than the official Elo system. Already nine teams have created systems that make more accurate predictions than Elo. It’s not a surprise that Elo has been outdone – the system was invented half a century ago before we could easily crunch large amounts of historical data. However, it is a big surprise that Elo has been outperformed so quickly given that it is the product of many years’ work (at least it was a surprise to me).
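For readers unfamiliar with the baseline being beaten: the standard Elo scheme predicts each player's expected score from the rating gap and nudges ratings toward the observed result. The sketch below uses the textbook formula with an illustrative K-factor of 32; the competition's Elo implementation may differ in details such as K and rating floors.

```python
def expected_score(r_a, r_b):
    """Expected score of player A vs. player B under the standard Elo curve."""
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))

def elo_update(r_a, r_b, score_a, k=32):
    """Return updated (r_a, r_b) after one game.

    score_a is 1 for an A win, 0.5 for a draw, 0 for a loss.
    k is the update step size; 32 is a common illustrative choice.
    """
    e_a = expected_score(r_a, r_b)
    new_a = r_a + k * (score_a - e_a)
    new_b = r_b + k * ((1.0 - score_a) - (1.0 - e_a))
    return new_a, new_b

# Two equally rated players; A wins, gaining what B loses.
a, b = elo_update(1500, 1500, 1.0)  # → (1516.0, 1484.0)
```

Note that the update uses only the current ratings and the game outcome, which is exactly why methods that exploit the full historical record can beat it.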
Rob Hyndman from Monash University has put up the first part of a tourism forecasting competition. This part requires participants to forecast the results of 518 different time series. Rob is the editor of the International Journal of Forecasting and has promised to invite the winner to contribute a discussion paper to the journal describing their methodology and giving their results (provided the winner achieves a certain level of predictive accuracy).
Finally, the HIV competition recently ended. Chris Raimondi describes his winning method on the Kaggle blog.
Cool stuff. On the chess example, I’m not at all surprised that Elo has been outperformed. Published alternatives have been out there for years. We even have a chess example in Bayesian Data Analysis (in the first edition, from 1995), and that in turn is based on earlier work by Glickman.
I like this competition model and would like to propose some competitions of our own, through the Applied Statistics Center. I’m thinking that, done right, this could be the basis of a Quantitative Methods in the Social Sciences M.A. thesis. In any case, it would be a great way for students to get involved.