Bayes, Popper, and Kuhn

Since I’m referring to other people’s stuff, let me link to a recent entry on statistics and philosophy in Dan Navarro’s blog, which I pretty much agree with except for the remark that Bayesian statistics is Kuhnian. I disagree strongly–I think Bayesian data analysis is Popperian. But, then again, Sander Greenland disagrees with me. So who knows?

I’ve always been turned off by Kuhn’s philosophy of science and preferred Popper’s more hard-line falsificationist attitude. And I see Bayesian data analysis as fitting well into the falsificationist approach–as long as we recognize that BDA includes model checking (as in Chapter 6 of our book) as well as Bayesian inference (which, as far as I can tell, is purely deductive–proceeding from assumptions to conclusions). Yes, we “learn,” in a short-term sense, from Bayesian inference–updating the prior to get the posterior–but this is more along the lines of what a Kuhnian might call “normal science.” The real learning comes in the model-checking stage, when we can reject a model and move forward. The inference is a necessary stage in this process, however, as it creates the strong conclusions that are falsifiable.
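To make the “purely deductive” claim concrete, here is a minimal sketch (mine, with made-up numbers) of conjugate Bayesian updating: given the prior and the data, the posterior follows by calculation alone, with no further judgment required.

```python
# Beta-Binomial updating: a Beta(1, 1) prior on a coin's bias,
# then 7 heads in 10 flips. By conjugacy the posterior is
# Beta(1 + heads, 1 + tails) -- pure deduction from the assumptions.
prior_a, prior_b = 1, 1              # Beta prior hyperparameters
heads, tails = 7, 3                  # observed (hypothetical) data
post_a, post_b = prior_a + heads, prior_b + tails
post_mean = post_a / (post_a + post_b)
print(post_a, post_b, round(post_mean, 3))  # → 8 4 0.667
```

The model-checking step is what sits outside this calculation: nothing above can tell you whether the Beta-Binomial model itself is any good.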

9 thoughts on “Bayes, Popper, and Kuhn”

  1. Popper as a hard-line falsificationist? IMHO, he just substituted his own metaphysical proposition (falsification) while trying to demolish others. I say metaphysical because it's not clear under what conditions his theory of falsification could be refuted.

  2. I think the sense that the Bayesian approach is Kuhnian is that they both acknowledge science as being subjective – something Popper denies. I think one can be a Bayesian within the Kuhnian framework, but I'm not sure much needs be said beyond that.

    As for Popper, his ideas were, well, falsified in the 60s. Science has a positivist element to it – we do confirm theories, which Popper could never accept.

    Popper also rejected the subjective interpretation of probability, on the grounds that subjective probabilities are not empirical.

    There are a couple of more recent attempts to create philosophies of science based on statistical theory, one of which is Bayesianism. If you really want to lose your soul, a good introduction is Chalmers' "What Is This Thing Called Science?".

    Bob

    The way I've always seen it, Popper was proposing a normative theory of good science, while Kuhn's point was that if you look at the way science has actually developed, it does not fit the Popper model. Instead you tend to see people working within certain paradigms of thought, deepening them, until the paradigms are overthrown by growing inconsistencies with data and competition from emerging paradigms.

    I don't think this interpretation implies that science is subjective in Kuhn – you can't just come up with any old rubbish and label it science – there are internal rules for each paradigm, and ultimately unsuccessful paradigms lose influence, partly because science can be exploited technologically, and better science on the whole generates advances in technologies.

    On the whole though I think Popper's argument fails because he does not actually identify correctly what is central to good science – the fact that the process of scientific enquiry is one of rational discussion governed, ultimately, by appeal to the facts. It is the process that matters, not whether scientists can draw a tight dividing line between good and bad science.

    To put it another way – the messy but mainly rational discourse of science gets there in the end, because it generates new ideas, new methods, and over time weeds out arguments and theories that do not 'work' as well – even if this is not always achieved by applying strict falsificationist procedures.

    A corollary argument would be that attempting to apply strict falsifications across a discipline (eg social sciences) may not always be beneficial. But – that's a whole new bundle of issues to do with pluralism, the problems of relating theoretical variables to observable quantities, the use of 'false' models as heuristics or parables, and so on.

  4. Wow–I didn't know that so many people were down on Popper. Despite what Bob says, I don't think Popper's ideas were "falsified". But I'll clearly have to refine my arguments and come back later and take another whack at this one. And I'll have to do a lot more reading, probably starting with whatever Bob recommends.

    Here's the short form of my argument: I don't see Bayesian statistics as subjective. I see a Bayesian model as a clear set of assumptions, whose implications are then deductively evaluated. The assumptions can be subjective but they need not be–except to the extent that all statistical procedures are "subjective" in requiring some choice of what to do.

    In my view, a key part of Bayesian data analysis is model checking. This is where I see the link to falsification. To take the most famous example, Newton's laws of motion really have been falsified. And we really can use methods such as chi-squared tests to falsify little models in statistics. Now, once we've falsified, we have to decide what to do next, and that isn't obvious. A falsified model can still be useful in many domains (once again, Newton's laws are the famous example). But I like to know that it's falsified.
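    As a toy illustration of that kind of falsification (my own hypothetical counts, not real data), here is a chi-squared check of a "fair die" model, done by hand:

```python
# Chi-squared goodness-of-fit as model checking.
# Model: the die is fair, so each face is expected 1/6 of the time.
observed = [18, 22, 16, 14, 12, 38]       # hypothetical counts, 120 rolls
expected = [sum(observed) / 6] * 6        # fair-die model: 20 per face
chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
# 11.07 is the 5% critical value for chi-squared with 5 degrees of freedom
if chi2 > 11.07:
    print("fair-die model falsified at the 5% level")  # this branch fires
```

    Falsifying the model says nothing about what to use instead – that decision, as above, is the hard part.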

  5. I hope I didn't sound too down on Popper. The procedure you describe seems to me completely logical, and a perfectly rational view of how falsification makes for good science.

    The problem though arises if one sees a strict falsificationist approach as the correct 'one size fits all' approach for good science. That is when it starts to get murky.

    There is a whole bundle of issues here, related mainly to social science (which I know better than 'harder' sciences):

    – the problem that social science deals with mechanisms, not laws, and so must perhaps be less ambitious (cf Jon Elster's work) (it comes closer to trying to tell the right story about how something happened, rather than finding causal laws)

    – the Duhem-Quine problem

    – the difficulties associated with reductionism in models of social behaviour

    – the theoretical status of using 'false' models as the core heuristics in a discipline (economics?)

    – the fact that social science looks at behaviour of conscious people that interact strategically, where context matters, and so does history.

    All of which means that the nature of explanation may be different in social science, and that the methodology of the discipline may have to reflect that.

  6. A few comments:

    Firstly, on the subjectivity issue, the Bayesian and Kuhnian paradigms both explicitly acknowledge that they are subjective (e.g. through priors), whereas other philosophies try to deny it, which strikes me as being like trying to deny that science is done by human beings. I read somewhere the idea that the Bayesian approach is rational (in the way it treats subjectivity), which I think is a nice distinction.

    A comment on rjw's comments – most of the issues he raises are familiar to me working in ecology and evolutionary biology. I have a paper in press arguing that the search for laws of ecology is futile. I think the idea that science searches for laws is naive, and certainly some philosophers of science are arguing this too. This means that the methodologies of the natural sciences and the social sciences are probably closer together than some might think, and statistics provides a lot of the links.

    As you can see, this is something I've spent far more time than is healthy thinking about.

    Bob

  7. I think I need to go and re-read Kuhn, because I always took his work on this to be more of a dissection of the state of affairs and less of a treatise in support of the inferences in science. I've felt that Lakatos took Popper to a place I'm comfortable with, and did so in a way that I could easily explain to other people who weren't into philosophy of science stuff, a hallmark of a good theory. Moreover, I think it adds the self-falsifiability piece that Popper was maybe missing.

    As to the last couple of comments, I like how there's a natural scientist who's showing the dirty shop floor, so to speak. It seems the hard sciences don't necessarily have a firm hold on Truth, and they may be just as off the money as all of us humble social scientists are. All our models are false, but so are theirs! They're all just theories, false theories, but that's what I like about it. There's room to go some place with it. Maybe I'm just too young (I'm still a grad student), but it gives me hope, and it does not make me want to throw my hands up and say "well, we should just say we can't do it, and take a step back and tell little stories of what's happening." I think this is where I disagree with both of the last two posts: I don't think that the search for laws and rules is naive, it's just not real likely that any of us is going to come up with a law with any real lasting power. That all these "laws" may be false as the day is long doesn't mean that trying to describe stable patterns of behavior, and the motivations for them, and then give rules to the patterns is in vain.

  8. I guess I ought to clarify why I think Bayesian stats fit with Kuhn's ideas, and what I do and don't like about Popper. I think that falsification is a great thing to be able to do, but it doesn't happen much in practice, and doesn't really make sense in theory. When a model doesn't work, we tweak it so it does and then claim that we were right all along. No-one ever really thinks their ideas have been falsified, and so the theories survive precisely as long as their authors can convince others to believe in them. And in any case, I think the probability that the "truth" is expressible in the language of probability theory (or any other language humans can use) is vanishingly small, so we should conclude a priori that all theories are falsified. So both in principle and in practice I don't find falsificationism to be helpful.

    What does happen though (or at least, what I've experienced), is that people propose a range of different theoretical principles which translate into modelling assumptions. Given those theoretical principles, we can learn about what the models should look like. That's, as Andrew says, "normal science" and it's much like Bayesian updating on a fixed parameter space. But sometimes (often when the models aren't performing well), you go back, come up with a new set of theoretical principles and modelling assumptions, and see how those stack up against the old ones. That's a good old fashioned "scientific revolution", and it's much like what happens if you compare the posterior odds of a great model you thought of yesterday against an old one that's got a lot of holes in it.

    I think where Bayes and Kuhn agree is that the conclusions you reach depend on the class of explanations that you originally considered. If all members of the class behave poorly, it's back to the drawing board, and not necessarily a case of saying "well, X is falsified, so logically the next thing to try is Y". To take the classic example, it's not clear to me how falsifying Newtonian physics would naturally and inevitably lead to one positing relativistic physics. That part of the process just doesn't seem to work like Popper said it did. But it fits fine with Kuhn, and isn't a theoretical problem for Bayesian inference: you just throw a new model into your class of explanations, and see what comes out having the best posterior odds.
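    A minimal sketch of that last step (my own hypothetical numbers): throw a second model into the class and see which has the better posterior odds.

```python
# Comparing two simple models by posterior odds.
# Model A: coin bias 0.5; Model B: coin bias 0.8.
# Hypothetical data: 8 heads in 10 flips.
from math import comb

def binom_lik(p, heads=8, n=10):
    """Binomial likelihood of the data under bias p."""
    return comb(n, heads) * p**heads * (1 - p)**(n - heads)

prior_odds = 1.0                      # no initial preference for A or B
bayes_factor = binom_lik(0.8) / binom_lik(0.5)
posterior_odds = prior_odds * bayes_factor
print(round(posterior_odds, 2))       # → 6.87, favouring Model B
```

    Nothing here tells you to invent Model B in the first place – that step comes from outside the formalism, which is the Kuhnian part.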

    Of course, I could be wrong.

  9. Hmm. Just noticed that I could probably have just said: "I agree with rjw". I'm not down on Popper either. I just think that Kuhn came along and gave us a better model of what happens in practice. I actually reckon that science is subjective (though I'm probably in a minority), in that it's all about what we believe about the world. But that's not the same as saying it's arbitrary. Like rjw says, you can't just make up any old rubbish and call it science. We have standards to maintain.

Comments are closed.