Colorless green facts asserted resolutely

Thomas Basbøll [yes, I’ve learned how to smoothly do this using alt-o] gives some writing advice:

What gives a text presence is our commitment to asserting facts. We have to face the possibility that we may be wrong about them resolutely, and we do this by writing about them as though we are right.

This and an earlier remark by Basbøll are closely related in my mind to predictive model checking and to Bayesian statistics: we make strong assumptions and then engage the data and the assumptions in a dialogue. Assumptions + data -> inference, and we can then compare the inference to the data, which can reveal problems with our model (or problems with the data, but that’s really problems with the model too; in this case, problems with the model for the data).
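
To make that loop concrete, here is a minimal toy sketch (my own invented example, not anything from Basbøll or from a real analysis): fit a deliberately simple normal model to skewed data, then check the fitted model by simulating replicated datasets and comparing a test statistic to the observed one. The data, the model, and the choice of skewness as the test statistic are all assumptions made just for illustration.

```python
# Toy sketch of assumptions + data -> inference -> check (all numbers invented).
import numpy as np

rng = np.random.default_rng(1)

# "Data" the model will turn out to be wrong about: right-skewed.
y = rng.lognormal(mean=0.0, sigma=1.0, size=200)

# Strong assumption: y ~ Normal(mu, sigma), flat prior on mu, sigma fixed at its
# sample value (a crude approximation, good enough for a sketch).
mu_hat, sigma_hat = y.mean(), y.std(ddof=1)
mu_draws = rng.normal(mu_hat, sigma_hat / np.sqrt(len(y)), size=1000)

# Predictive check: can the fitted model reproduce the skewness of the data?
def skewness(x):
    return np.mean(((x - x.mean()) / x.std()) ** 3)

t_obs = skewness(y)
t_rep = np.array([skewness(rng.normal(mu, sigma_hat, size=len(y))) for mu in mu_draws])

# A predictive p-value near 0 or 1 flags a problem with the model
# (or with the data, i.e., with the model for the data).
print("Pr(T(y_rep) >= T(y)):", np.mean(t_rep >= t_obs))
```

Here essentially no replicated dataset is as skewed as the observed data, so the p-value is near zero and the normal assumption gets rejected: assert the model resolutely, then let the comparison with the data push back.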

I like the idea that a condition for a story to be useful is that we put some belief into it. (One doesn’t put belief into a joke.) And also the converse: that thinking hard about a story and believing it can be the precondition to ultimately rejecting it because its implications don’t make sense. It’s like chess: the way to refute a move is to consider making the move (which is as irrevocable, in a chess context, as believing a story is in the context of basing a social-scientific theory on it).

I’m also reminded of the advice from Pólya (or somebody like that) about solving math problems. If the question is, “Is statement A true?”, you can try to prove A or find a counterexample. But it’s hard to do both at the same time! Better to take a guess and go from there: if you try and try to prove it and fail, this may give insight into where to find a counterexample (in those folds of the problem that make A so hard to prove); conversely, if you can’t find a counterexample no matter how hard you look, you can try to systematize that search, thus perhaps leading to a proof that no counterexample exists.

23 thoughts on “Colorless green facts asserted resolutely”

  1. I never believe any mathematical model to be true and still find models useful. I think (at least tentatively) that the usefulness of mathematical models is totally separate from belief in them.

    • I have heard philosophers say something similar about models of any kind, even maps (like maps of tube stations). I’ve never quite understood what they mean, but that may be because I’m a pragmatist at heart (I don’t really acknowledge the idea of a “truth” that isn’t useful). My view is that a model always implies propositions which are either true or false, and that a model that implies false propositions will not be very useful. The only content a mathematical model has is the predictions it makes, and we “believe in” a model if we trust its predictions. Likewise, nobody would say that the territory must “look like” the map in order for the map to be “true”. It just has to identify things in the right relative positions (like stations in a rail system). Models are models *of* something, and if they get that something right, they are “true” in the only sense I know.

      • Suppose you have one model of some biological process that involves cellular division, and another that involves the number of cells staying constant while their size gets bigger. Both predict the proper growth rate of the overall organism, yet one is more true than the other. In particular, if we were to acquire new data about the number of cells and their size, we would determine that perhaps cell division is a better description of growth.

        Often we have some information that we want to use to make inferences about things we can’t observe. In that case some models can be *VERY WRONG* whereas others are less wrong, but both models might be able to fit the observed data very well. Once we fit these models they might both be equally useful for predicting bulk-scale growth, but only one of them is going to be useful for predicting the internal processes that cause the growth.

        • It seems clear that the “truth” and “falsity” of the two models you describe cashes out exactly in the overall accuracy of the predictions they make about observed data. That’s all I’m saying. A particular model is not *false but useful*; it is as useful as it is true. (The cell-growth model must be getting something right, even if it’s wrong about cell division.)

        • Thomas:

          I see what you’re saying, but statistical models are so precise that they will be necessarily false even while having true aspects. For example, suppose we model the heights of adult Americans as a mixture of two normal distributions. This is pretty close to the truth but of course not exactly true. What do you call this model: true or false??
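
          As a quick sketch of what that two-normal mixture looks like (the component means, spreads, and mixing proportion below are invented round numbers, not estimates from any real height data):

          ```python
          # Mixture of two normals as a toy model for adult heights (invented parameters).
          import numpy as np

          rng = np.random.default_rng(2)
          n = 10_000

          # Each person is drawn from one of two normal components.
          is_male = rng.random(n) < 0.49                     # hypothetical mixing proportion
          heights = np.where(is_male,
                             rng.normal(69.5, 2.9, size=n),  # inches, invented parameters
                             rng.normal(64.0, 2.7, size=n))

          # Pretty close to a plausible height distribution, but certainly not exactly
          # the true one: false as a precise statement, useful as a model.
          print(round(heights.mean(), 1), round(heights.std(), 1))
          ```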

        • Just to underline what Andrew wrote: According to the normal distribution, the probability that a future observation is a rational number (any rational number at all) is zero, despite the fact that our measurement instruments can *only* measure rational numbers. This means that every continuous distribution is obviously, blatantly wrong. It may be useful anyway.

          By the way, Andrew, even what it means for something to be “close to the truth” is difficult enough to explain. If you want to make this precise, you’d probably need to postulate that there is at least some true underlying (frequentist?) distribution. Or what kind of truth are you referring to?

        • Christian:

          The distribution I’m thinking of is the set of all heights of American adults (excluding those in wheelchairs, etc.). There are enough adults, and height is defined imprecisely enough (for example, it varies during the day), that the distribution can be considered continuous. “Close to the truth” can be defined easily enough: there are lots of reasonable ways to define closeness of distributions.
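
          One concrete option, as a sketch (the data and the mixture parameters below are all invented): the Kolmogorov distance, i.e., the largest gap between the empirical CDF and the model CDF.

          ```python
          # Kolmogorov distance between an empirical distribution and a model CDF
          # (one of many reasonable notions of "close"; all numbers are invented).
          import numpy as np
          from scipy.stats import norm

          rng = np.random.default_rng(3)
          heights = np.sort(rng.normal(66.7, 3.6, size=5000))  # stand-in for observed heights

          def model_cdf(x):
              # Mixture of two normals with hypothetical parameters.
              return 0.51 * norm.cdf(x, 64.0, 2.7) + 0.49 * norm.cdf(x, 69.5, 2.9)

          n = len(heights)
          F = model_cdf(heights)
          emp_hi = np.arange(1, n + 1) / n  # empirical CDF just at/after each observation
          emp_lo = np.arange(0, n) / n      # ... and just before it
          ks = max(np.max(np.abs(emp_hi - F)), np.max(np.abs(emp_lo - F)))
          print("Kolmogorov distance:", ks)
          ```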

        • My point is that the “truth or falsity” of the model can be thought of as how well the model makes generalizations about *unobserved* data. Both models will be useful for growth predictions, but one will be significantly closer to true (i.e., useful in a wider and more general set of environments). I think it’s often the case that we want models to make predictions about things that are difficult or impossible to observe (in the simplest formal sense, the parameters of the model). Because those things are nearly impossible to observe, we need the model to perform robustly in situations where we *can* observe outcomes; when it does that consistently across a wide variety of situations, we interpret the model as relatively close to true.

        • I guess I’m not quite understanding what the model “says”. I’m guessing the model is “false” because, if taken literally, it says that some actual number of people is some actual height (e.g., that in this population of 4807 people, 102 people are 5’9″), or that (if it’s a continuous distribution) every individual is some particular height (person number 2307 is 5’9″). But isn’t this model only false on a very “fundamentalist” reading of what it is telling us? If it is saying something about the population, not about individuals within it, can’t it just be (more or less) “true”? As long as it predicts, reasonably accurately, how many people will be above a certain height in a given random sample, isn’t the model just “true” of the population? And if it doesn’t make that prediction accurately, how is it useful?

        • Andrew: If you think of computing a distance between an empirical distribution and a theoretical one, (a) you need to be careful which distance you choose (total variation and some others will give you the maximum possible distance between any empirical and any continuous distribution), and (b) this only makes sense if you take the i.i.d. assumption for the empirical data for granted, which you probably shouldn’t (some of these guys are related to each other). Computing distances between i.i.d. models and real data with potentially arbitrarily complex dependence structures is tedious, and I guess that you’ll have to deal with effective sample sizes of one (there is only a single sequence of observations).
          The other thing is that if we can only measure rational data, claiming that the true distribution can be taken as continuous is metaphysical.

        • Daniel and Thomas: Couldn’t one do better without using the term “truth” here? If you like a model because it properly predicts the things you are interested in, why not just say “it properly predicts the things I’m interested in” instead of calling it “true”? I think that much confusion about statistics is due to the fact that researchers, including many statisticians, use misleading terminology about what we can know and what we can’t.

        • Christian and Andrew.

          To sketch this out a bit further and make it Peircedestrian.

          All representations or models are continuous, as they represent possibilities, not actualities. Roughly, between any two representations there is one in between.
          (A Binomial model only allows two outcomes, but the probability of each varies continuously.)

          Models work as they can be close enough in some sense to actualities.

          So the Normal distribution allows any real number as a possible outcome, while any actual number observed is discrete; but it is close enough in some sense.

        • Christian:

          There’s no iid assumption. I’m just talking about the distribution of people’s actual heights. I think you’re making this more complicated and “foundational” than you need to. There’s an actual distribution (which we do not know, but we can estimate nonparametrically or using lots and lots of parameters) and there’s a simple model (the mixture of normals). The simple model is close (again, there are various reasonable ways to define “close” in this context) but not identical to the actual distribution.

        • Andrew: If you want to model the actual heights of n people, you need a model for n observations, not just for a single one. And such a model needs an assumption about the dependence structure between them, or the absence of it (iid). Or not?

          K.O’Rourke: All fine by me and this all makes sense without making reference to unobservable “truth” (rather, and more interestingly, to “some sense” to be specified).

        • Andrew: I thought that the models and assumptions you were writing about in the original posting would actually include (in)dependence structures, exchangeability or something like this.

        • Christian:

          Yes, you’re right on that. The most recent discussion (regarding the distribution of heights) was a side-conversation motivated by Thomas’s point. I don’t think independence assumptions etc are needed to talk about the distribution of heights, but, yes, I agree that they arise in most of my work.

        • Christian: I wasn’t actually the one who used the word “true” first. I just said we should “believe” what we say and “assert facts”. I didn’t make a big deal out of “truth”. It was you who insisted that models are *not* true, just useful.

          I think it is confusing when a statistician (or any other scientist) goes out of his way to stipulate that an accurate (i.e., predictive) model is “not true”. Or, as Andrew enigmatically puts it: “statistical models are so precise that they will be necessarily false”! What *you* mean by that is probably correct, but most people will assume that you do all your modeling and testing in order to make the model as, well, true to the facts as you can. And I think that, in the sense most people mean it, that’s true too.

          If it is not meaningful to call a model “true”, then it isn’t meaningful to call it “false” either. But I think it’s selling the science of statistics short to say that it doesn’t arrive at truths.

        • Thomas: I agree that I’m not very good as a salesperson and that there is a demand for truth which I’m not going to satisfy. Still, I prefer not to use the term “truth” where it isn’t justified from my point of view, and this applies to probability models to the extent that you either cannot properly define and observe the truth, or you can observe something that is in obvious contradiction to whatever truth is captured in a certain model (as in the case of continuous distributions used for discrete data).
          When I want to sell statistics, I prefer to go with what makes more sense to me, for example a reference to measurable prediction quality.
          Apart from the sales aspect, I think that the usual way of talking about models (specific ones and in general) as if they could (and should) be true contributes big time to the confusion about statistics among reasonably intelligent non-experts.

        • Christian (at May 2, 2012 at 4:20 pm):
          I agree, but this confusion applies to experts too (at least occasionally).

          One example was Peter McCullagh assuming that continuous observations gave various ancillaries in a Cauchy example, I believe (G. Barnard published a critical note on this).

          Another is here: http://radfordneal.wordpress.com/2008/08/09/inconsistent-maximum-likelihood-estimation-an-ordinary-example/ (comment 17 raises the issue and the author responds in comment 19), but note the other experts commenting who apparently missed the continuity point (literally).

          Thomas (at May 2, 2012 at 4:20 pm):
          The above is one of the reasons we need the *not* true jargon (and maybe the “less wrong” jargon).

          Also, “to say that it [statistics] doesn’t arrive at truths” is to say the truth.
          With uncertainty there are simply no truth values.

          A and not-A is not a contradiction: e.g., that an interval does and does not include the true parameter value is represented by a probability (model); see the sketch at the end of this comment.

          It is why, in statistics, we are always stuck in the parameter space, whether people would rather plot or think in the data space or not.
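
          A small simulation sketch of the interval point (every number here is invented): whether any one realized interval includes the true value is not something we get to announce; what the procedure carries is a coverage probability, which the simulation estimates.

          ```python
          # Coverage of a standard 95% interval, estimated by simulation (invented numbers).
          import numpy as np

          rng = np.random.default_rng(5)
          true_mu, sigma, n, reps = 10.0, 2.0, 25, 10_000

          covered = 0
          for _ in range(reps):
              y = rng.normal(true_mu, sigma, size=n)
              half_width = 1.96 * sigma / np.sqrt(n)
              covered += (y.mean() - half_width <= true_mu <= y.mean() + half_width)

          # The answer lives in the parameter space as a probability (about 0.95 here),
          # not as a truth value attached to any one realized interval.
          print("empirical coverage:", covered / reps)
          ```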

        • K O’Rourke: Thanks, that’s interesting. I agree that experts may miss the point too. Laurie Davies has a nice illustration of the continuity issue in his “Data features” paper, using a Neyman-Pearson test to compare a Gaussian distribution with perfectly chosen parameters for body heights in cm against a Poisson(1), which always wins. Of course!(?)
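
          A rough sketch of the continuity point (my own toy version, not Davies’ actual construction; the data are invented): with integer-valued records, a continuous model puts probability zero on every exact recorded value, so a literal comparison of exact probabilities lets even a wildly implausible discrete model win, unless the continuous model is evaluated on the rounding bins.

          ```python
          # The continuity point, taken literally: exact probabilities of integer heights.
          import numpy as np
          from scipy.stats import norm, poisson

          rng = np.random.default_rng(4)
          heights_cm = np.round(rng.normal(175, 7, size=100)).astype(int)  # invented data

          # P(X = k) under a continuous normal is exactly zero for every recorded integer k.
          logp_normal_exact = np.full(heights_cm.shape, -np.inf)

          # Poisson(1) is absurd for heights, yet its exact probabilities are merely tiny.
          logp_poisson = poisson.logpmf(heights_cm, mu=1)

          print("normal (exact values):", logp_normal_exact.sum())  # -inf
          print("Poisson(1):", logp_poisson.sum())                  # hugely negative, but finite

          # The sensible repair: give the normal the rounding bin (k - 0.5, k + 0.5).
          logp_normal_binned = np.log(norm.cdf(heights_cm + 0.5, 175, 7)
                                      - norm.cdf(heights_cm - 0.5, 175, 7))
          print("normal (binned):", logp_normal_binned.sum())       # now the normal wins easily
          ```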

  2. Reminds me of this line from a paper:

    In fact, the violations of the proposed rules are so rare, that for a while a group of us were trying to establish that they were foolproof. At the same time, others in our group were looking for counter-examples [hence the large number of coauthors!].
