“All Models are Right, Most are Useless”

The above is the title of a talk that Thad Tarpey gave at the Joint Statistical Meetings in 2009. Here’s the abstract:

Students of statistics are often introduced to George Box’s famous quote: “all models are wrong, some are useful.” In this talk I [Tarpey] argue that this quote, although useful, is wrong. A different and more positive perspective is to acknowledge that a model is simply a means of extracting information of interest from data. The truth is infinitely complex and a model is merely an approximation to the truth. If the approximation is poor or misleading, then the model is useless. In this talk I give examples of correct models that are not true models. I illustrate how the notion of a “wrong” model can lead to wrong conclusions.

I’m curious what he had to say—maybe he could post the slides?

P.S. And here they are!

19 thoughts on "“All Models are Right, Most are Useless”"

  1. I’m interested to see what was said at the talk as well. I’ve always been a bit annoyed by that Box quote, as it’s always seemed to me fairly obviously a daft thing to say. If all models are wrong, then you haven’t got the right definition of wrong; a useful definition of “wrong” should allow us to say that some models are right and some wrong.

    A model is just a simplification of reality that helps one make decisions about how to act; it should also be simple and easy for those making the decisions to understand. A model is right if it accurately reflects reality within the part of reality of interest and at the appropriate resolution. Here’s an example:
    Model 1: In the winter it’s cold and in the summer it’s hot.
    It’s a correct model for temperate climates, and it is simple to explain and understand. It allows users to make decisions: if winter is approaching I should get my boiler serviced and look out my warm coat; if summer is approaching I should get the BBQ out of the garage. It would not be appropriate to use the model to predict the temperature on specific days, but it would be appropriate to predict from it that the average temperature in July will be greater than the average temperature in December. You do, however, have to be careful about the definitions of “summer” and “winter”, which vary depending on whether you are in the northern hemisphere or the southern hemisphere.

    • Your example of summer and winter strikes me as a good example of Box’s point. It is not actually ‘wrong’, but it is very good. Conversely, it could be considered right but not very good (perhaps Tarpey’s point?).

      I think the point of Box’s statement is that no model is going to perfectly reflect the data or provide perfect prediction. It’s a little like the argument about the map and the physical system: a perfect map would just be the physical system itself.

      If winter’s approaching, as you say, you get the boiler serviced. But that level of model does not predict your fuel cost, or the difference between winter in Melbourne, Australia and winter in Whitehorse, Canada.

      I have no problem with Box’s formulation, though I would like to see Tarpey’s discussion. The differences may not be that great.

    • It seems to me that even the idea of a model being “right” or “wrong” is a bit antiquated, perhaps stemming from Aristotelian logic and the like. To me, if there is a Truth, it doesn’t even matter that much, because we could never confirm that we had obtained it. Moreover, obtaining it might not be so desirable; it depends on one’s desiderata, and mine are often predictive in nature, not descriptive. In such scenarios, we search for optimal models, where optimality is defined with respect to some objective criterion. Then one is free to put in as many or as few assumptions as one desires, because it is clear from the get-go that we are not striving for Truth, but rather for performance.

  2. Is there a reason why Box restricted himself to the term ‘model’?
    Why not
    * all explanations are wrong, some are useful.
    * all reasoning is wrong, some is useful.
    * most proofs are wrong, some are useful.

  3. I find the Box quote pithy but frustratingly incomplete. It raises the question “How can a wrong model be useful?”, which isn’t a question with a brief answer, because models (of all sorts, not just statistical) can be wrong in lots of different ways, and not all those ways of being wrong are useful, or useful for all purposes. The quote also tends to mislead students, who read Box as saying that some models are useful despite being wrong, when in fact models are often useful *because* they’re wrong (e.g., when a model ignores factors of minor importance, thereby isolating and clearly illustrating the important effects of the included factors).

  4. Continuum Mechanics in physics is a good example of well-developed models. Typical models, such as Fluid Mechanics, ignore electromagnetic phenomena altogether. For example, an elastic model of a solid has nothing to say about the color of the solid, only about its mechanical properties.

    In that sense they are incomplete. But it would be a bit bizarre to claim they’re all wrong. All these models make tons of predictions about pressures, responses to stresses, or the time evolution of the material. These predictions have to be right or the model is rejected.

    A better phrase might be:

    “All models are incomplete, but good models are right about something important”.

    • Yes. A better formulation, but not as catchy as Box’s. :) I suspect that is what he meant; that’s certainly how I interpret it, but I have not read the original, just some quotes.

      On the other hand, are there any models that come to mind that are so wrong as to be harmful?

      • “On the other hand, are there any models that come to mind that are so wrong as to be harmful?”

        In my own field of ecology, absolutely:

        http://oikosjournal.wordpress.com/2011/06/17/zombie-ideas-in-ecology/

        And in economics, Paul Krugman and others have been making the case for several years that whole classes of macroeconomic models have been positively harmful guides to policy. I’m not qualified to judge the economic arguments. But I suspect with a bit more thought it wouldn’t be difficult at all to pile up examples of models that are so wrong as to be harmful.

  5. I think Monty Python made a useful contribution here:

    King Arthur: [in awe] Camelot!
    Sir Galahad: [in awe] Camelot!
    Sir Lancelot: [in awe] Camelot!
    Patsy: [derisively] It’s only a model!

  6. jrkrideau, there have been cases in psychology where the research has been wrong (or at least confused) for decades because of incorrect models. An example that comes to mind concerns confidence ratings. See, for example,

    Simultaneous over- and underconfidence: The role of error in judgment processes
    http://psycnet.apa.org/journals/rev/101/3/519/

    I really don’t get all the hate for the quote. I think it is great, if only as a reminder that one should understand the limits of one’s model and the implications of those limits.

  7. Going through Tarpey’s slides, I also noticed his interpretation of (frequentist) confidence intervals as having a 95% chance of containing the true parameter, which, as he also notes, is contrary to every statistics text out there.
    I find the example that he provides convincing, though.
    If you tell someone to pick a card and that person picks an Ace, then p(Ace) is 1 for the person who looked at the card, but for you it’s 4/52. Same for the (frequentist) confidence interval: if you draw one particular sample and construct the confidence interval, it either does contain the true parameter or it does not (p is either 0 or 1), but from the observer’s perspective there is uncertainty about this particular interval, so why not put a number on it: p = .95. Maybe this is an improper Bayesian perspective, but I never quite understood the frequentist position that this statement about a confidence interval is not allowed. Tarpey’s example and perspective make sense to me.
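
    A minimal simulation sketch of the two readings (my own illustration, not from Tarpey’s slides; the normal mean, sigma, sample size, and seed are made-up values):

        # Repeated-sampling view of a 95% confidence interval for a normal mean
        # with known sigma: any single interval either covers the true mean or it
        # doesn't, but across many repetitions about 95% of them do.
        import random, statistics

        random.seed(1)
        true_mu, sigma, n, z = 10.0, 2.0, 25, 1.96   # assumed values for illustration
        trials, covered = 10_000, 0
        for _ in range(trials):
            sample = [random.gauss(true_mu, sigma) for _ in range(n)]
            m = statistics.mean(sample)
            half = z * sigma / n ** 0.5
            if m - half <= true_mu <= m + half:      # this one interval: covers or not
                covered += 1
        print(covered / trials)                      # long-run coverage, roughly 0.95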

  8. I note the following from the presentation:

    … it is tempting to say there is a ‘probability’ of .95 that π is between .499 and .561. Do not yield to this temptation! … Any specific interval … either includes π or it does not … We cannot make a chance statement concerning this particular interval.

    I am wondering whether mixture models, which depend upon mixing proportions tied to latent variables, are not also misspecified in this sense, since this is essentially what they do: take an observation that belongs to one and only one regime and assign it a probability of being in each of the regimes (unless that assignment is unity for one regime and zero for the others).
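
    A small sketch of the assignment described above (my own illustration; the two-component Gaussian mixture and its parameters are made up):

        # Posterior "responsibilities" in a two-component Gaussian mixture: the
        # observation truly came from exactly one component, yet the model assigns
        # it a probability of membership in each component.
        import math

        def normal_pdf(x, mu, sd):
            return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

        weights = [0.6, 0.4]               # assumed mixing proportions
        mus, sds = [0.0, 3.0], [1.0, 1.0]  # assumed component means and sds

        x = 1.5                            # a single observation
        joint = [w * normal_pdf(x, m, s) for w, m, s in zip(weights, mus, sds)]
        responsibilities = [j / sum(joint) for j in joint]
        print(responsibilities)            # neither 0 nor 1: here roughly [0.6, 0.4]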

  9. I like Thad Tarpey’s idea of focusing on the positive: “all models are right.” However, it occurs to me that a model whose predictions are correct 0% of the time is just plain wrong. Or is it? Can we justify Tarpey’s idea even in this extreme case? Let’s examine a deliberately wrong model, one that is always off. Does it have a modicum of truth in it? It has tremendous value: we can use it to find out what won’t happen, with 100% confidence! OK, the other boundary case is a model that is right 50% of the time. Is there truth in that? Sure, half the time: the allure of gambling!

    At least I’ve convinced myself that Tarpey is correct: all models are right. But I disagree that most are useless. Even deliberately wrong models (and 50% models) tell us something useful about the world!

    So all models are useful and right! QED.

    Of course, Box is correct too: all models are useless and wrong from some perspective. That’s the nature of this beast! Thank you to Thad Tarpey for raising the issue.

  10. Seems to me that Tarpey is really a closet Jaynesian, but too steeped in orthodox education and too invested in the idea that the universe is random to realize it. On the one hand, he falls into the mind projection fallacy by writing down a probability distribution for an observer-independent “truth”; on the other hand, he argues for probability as a measure of the information state of the observer. He is conflicted between the idea of probability as an empirical quantity (as used by everyone around him) and the idea (à la Bernoulli, Laplace, Jeffreys, Cox, Turing, Shannon, Jaynes…) of probability as a way of quantifying an information state.

  11. Pingback: “All Models are Right, Most are Useless” « Another Word For It

  12. Pingback: Yes, all models are wrong…that totally misses the point « House of Stones
