Was Thomas Kuhn evil? I don’t really care.

OK, I guess I care a little . . . but when it comes to philosophy, I don’t really care about Kuhn’s personality or even what exactly he said in his books. I use Kuhn in my work, by which I mean that I use an idealized Kuhn, I take the best from his work (as I see it), the same way I use an idealized Lakatos and Popper, and the same way that Lakatos famously used an idealized Popper (Lakatos called him Popper2, I think it was).

Here’s what Shalizi and I wrote in our article:

We focus on the classical ideas of Popper and Kuhn, partly because of their influence in the general scientific culture and partly because they represent certain attitudes which we believe are important in understanding the dynamic process of statistical modelling.

Actually, we said “modeling,” but someone translated our article into British for publication. Anyway . . . we continue:

The two most famous modern philosophers of science are undoubtedly Karl Popper (1934/1959) and Thomas Kuhn (1970), and if statisticians (like other non-philosophers) know about philosophy of science at all, it is generally some version of their ideas. . . . We do not pretend that our sketch fully portrays these figures, let alone the literatures of exegesis and controversy they inspired, or even how the philosophy of science has moved on since 1970. . . .

To sum up, our views are much closer to Popper’s than to Kuhn’s. The latter encouraged a close attention to the history of science and to explaining the process of scientific change, as well as putting on the agenda many genuinely deep questions, such as when and how scientific fields achieve consensus. There are even analogies between Kuhn’s ideas and what happens in good data-analytic practice. Fundamentally, however, we feel that deductive model checking is central to statistical and scientific progress, and that it is the threat of such checks that motivates us to perform inferences within complex models that we know ahead of time to be false.
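To make that last point concrete, here is one minimal sketch of the kind of check the passage is talking about, with made-up data and a deliberately too-simple model (the code is an illustration I'm adding here, not something from the article):

import numpy as np

rng = np.random.default_rng(42)

# Made-up data: counts that are really overdispersed (negative binomial),
# deliberately fit with a too-simple Poisson model.
y = rng.negative_binomial(5, 0.5, size=100)

# "Inference within the model": the Poisson rate estimate is just the mean.
lam_hat = y.mean()

# Predictive check: simulate replicated datasets under the fitted model and
# compare a test statistic (here the variance) with the observed data.
T_obs = y.var()
T_rep = np.array([rng.poisson(lam_hat, size=y.size).var() for _ in range(1000)])
p_val = (T_rep >= T_obs).mean()

print(f"observed variance {T_obs:.1f}, predictive p-value {p_val:.3f}")
# A p-value near zero flags the anomaly: the model we were happy to compute
# with cannot reproduce the spread of the data, which is the signal to expand
# or replace it.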

My point here is that, as applied statisticians rather than philosophers or historians, we take what we can use from philosophy, being open about our ignorance of most of the literature in that field. Just as applied researchers pick and choose statistical methods in order to design and analyze their data, we statisticians pick and choose philosophical ideas to help us understand what we are doing.

For example, we write:

In some way, Kuhn’s distinction between normal and revolutionary science is analogous to the distinction between learning within a Bayesian model, and checking the model in preparation to discarding or expanding it. Just as the work of normal science proceeds within the presuppositions of the paradigm, updating a posterior distribution by conditioning on new data takes the assumptions embodied in the prior distribution and the likelihood function as unchallengeable truths. Model checking, on the other hand, corresponds to the identification of anomalies, with a switch to a new model when they become intolerable. Even the problems with translations between paradigms have something of a counterpart in statistical practice; for example, the intercept coefficients in a varying-intercept, constant-slope regression model have a somewhat different meaning than do the intercepts in a varying-slope model.
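As a small illustration of that last point (a simulation I am adding here, not something from the article), the fitted "intercepts" really do shift in value and in meaning once the slopes are allowed to vary:

import numpy as np

rng = np.random.default_rng(1)

# Simulate three groups whose true slopes differ (made-up numbers).
true = {0: (1.0, 2.0), 1: (3.0, 0.5), 2: (5.0, -1.0)}   # (intercept, slope)
g = np.repeat([0, 1, 2], 50)
x = rng.uniform(0, 10, size=g.size)
a = np.array([true[k][0] for k in g])
b = np.array([true[k][1] for k in g])
y = a + b * x + rng.normal(0, 1, size=g.size)

ind = np.column_stack([(g == k).astype(float) for k in range(3)])

# Varying-intercept, constant-slope model: y = a_g + b * x.
X1 = np.column_stack([ind, x])
a_const_slope = np.linalg.lstsq(X1, y, rcond=None)[0][:3]

# Varying-intercept, varying-slope model: y = a_g + b_g * x.
X2 = np.column_stack([ind, ind * x[:, None]])
a_vary_slope = np.linalg.lstsq(X2, y, rcond=None)[0][:3]

print("intercepts, constant-slope model:", a_const_slope.round(2))
print("intercepts, varying-slope model: ", a_vary_slope.round(2))
# In the constant-slope model each a_g has to absorb, on average, whatever its
# group's slope difference does over the observed x range; in the varying-slope
# model each a_g is that group's own value at x = 0. Same word, different meaning.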

This is all fine, but we recognize:

We do not want to push the analogy too far, however, since most model checking and model reformulation would by Kuhn have been regarded as puzzle-solving within a single paradigm, and his views of how people switch between paradigms are, as we just saw, rather different.

We’re trying to make use of the insights that Kuhn brought to bear, without getting tied up in what Kuhn’s own position was on all this. Kuhnianism without Kuhn, one might say.

Anyway, this all came up because Mark Brown pointed me to this article by John Horgan reporting that Errol Morris thinks that Kuhn was, in Horgan’s words, “a bad person and bad philosopher.”

Errol Morris! He’s my hero. If he hates Kuhn, so do I. Or at least that’s my default position, until further information comes along.

Actually, I do have further information about Kuhn. I can’t say I knew the guy personally, but I did take his course at MIT. Actually, I just came to the first class and dropped it. Hey . . . didn’t I blog this once? Let me check . . . yeah, here it is, from 2011—and I wrote it in response to Errol Morris’s story, the first time I heard about it! I’d forgotten this entirely.

There’s one thing that makes me a little sad. Horgan writes that Morris’s book features “interviews with Noam Chomsky, Steven Weinberg and Hilary Putnam, among other big shots.” I think there must be people with more to say than these guys. This may be a problem of celebrity: once an author reaches the celebrity stratosphere, he will naturally mingle with other celebrities. If I’m reading a book about philosophy of science, I’d rather see an interview with Steve Stigler, or Josh Miller, or Deborah Mayo, or Cosma Shalizi, or various working scientists with historical and philosophical interests. But it can be hard to find such people, if you’re coming from the outside.

32 thoughts on “Was Thomas Kuhn evil? I don’t really care.”

  1. Not that heroes can’t be imperfect, but I lost some respect for Errol Morris after learning about how he chipped in to the whole Elizabeth Holmes/Theranos hype machine.

    So when I hear that he said Kuhn was “a bad philosopher and a bad person,” I take it with a grain of salt.

    Otherwise, amen to the larger point about taking what you need from philosophers and discarding the rest. Not just in statistics, but in life.

    • Chris:

      Errol Morris was connected to Theranos? I had no idea! OK, I went on google and found this article which describes the story behind some TV commercials that Morris did for Theranos. I’m not sure what to think about this. If Morris was doing TV commercials for the money, well . . . I do things for the money too. I could see Morris doing some commercials without feeling the need to fully investigate the company that’s paying for it. The article also criticizes Morris for doing commercials for Nike, a company that uses sweatshop labor. I could imagine working for Nike. I mean, sure, I’d prefer they not use sweatshop labor, but I might just feel that this is not my business. Perhaps this isn’t so ethical on my part.

      Regarding Theranos, I’m much more bothered by David Boies, who didn’t just take the company’s money; according to the book Bad Blood, Boies actively harassed whistleblowers. That seems much worse. Not just guilt by association or poor vetting but direct bad behavior.

      Amusingly, it seems that Malcolm Gladwell, in one of his (pre-Theranos) books, celebrated Boies.

  2. If personalities and politics and personal judgments on the consequences of philosophers’ ideas are to be invoked, the one to be questioned is of course Karl Popper. That’s why it’s so important to rely on an idealized version of Popper. It is not an accident that idealized Popperism is the philosophy of science favored by scientific creationists and Intelligent Design theorists. Without idealization, how else could anyone take seriously a man who pronounced Darwin’s evolutionary theory unscientific? (Students desperately contrived trivial experiments so that evolution could pass muster and Popper could safely retract, redeeming his reputation without conceding the flaws in his philosophy of science.)

    And it is common to conveniently forget that Popper was first and foremost a philosopher of science. But the real beginning of his influence in the anglophone world came with The Open Society and Its Enemies. Then it continued in the Mont Pelerin Society, a vile outfit of academic thugs. See Mirowski and Plehwe’s The Road from Mont Pelerin for a tediously cautious history of what Popper was all about outside philosophy of science. Popperism, I think, is deeply implicated in the fetishization of IQ, the Big Five personality traits, pretty much the whole of evolutionary psychology, and the human biodiversity movement; and, in a mockery of Keynes, practical economists are often the slaves of a defunct Popper. Quite aside from the problem that modifying hypotheses/models is more a matter of the Duhem-Quine refutation of Popperism, the notion that attempting to falsify a hypothesis is science means that Michel Gauquelin’s experiments on the Mars effect, or the whole of parapsychology, count as scientific. I suppose people predisposed to revere Jonathan Haidt are apt to fall for this kind of nonsense; everybody likes to be told they’re doing up-to-date science.

    It’s been so long since I read Kuhn (forty-five years!) that I forget the details. But my (treacherous?) memory says Kuhn simply presupposed the progress of science, which left him with nothing to say about such problems as parapsychology etc. He should have written a sequel, The Structure of Scientific Counterrevolutions, I think. Counterrevolutionary ideologies are every bit as important in science as in the real world. Hence Popper. By the way, Friedrich von Hayek was a philosopher of science too, and his covert influence may well be more important than most realize.

    • Correction: Popper was first and foremost a political philosopher, not a philosopher of science. His attack on historicism was an attack on all the historical sciences, including natural science. It is Popper’s malign influence that demotes climate history, historical reconstructions of plate tectonics, the evolution of stars and galaxies, evolutionary history, etc., to not-quite-scientific endeavors. It is not an accident but a goal of Popper’s philosophy of science to create “World 3.” Look it up.

      I cut this stuff out for wordiness, then messed up the deletion and turned Popper into primarily a philosopher of science.

    • As a casual reader of Popper, I confess I have little contact with his political writings (I’m searching about the Mont Pelèrin Society right now!).
      But the whole idea that falsificationism somehow allows bizarre pseudo-science like ESP or astrology seems misguided to me. In The Logic of Scientific Discovery, the degree of testability of a theory depends on how severely it can be tested, which amounts to its empirical content. ESP and Gauquelin’s experiments are based mostly on the idea that their hypotheses can be ‘severely tested’ by statistical null (nil) hypothesis testing, which is not severe at all and not at all close to Popper’s requirement to falsify a theory. I believe Meehl discusses this difference at length in his ‘two knights’ paper, “Theoretical risks and tabular asterisks”.
      Also, the idea that the Duhem-Quine thesis somehow debunks falsificationism is acceptable only if we consider a completely naïve version of it (Popper_0, in Lakatos’s writings). Again in The Logic of Scientific Discovery, Popper is very well aware of the problem of deriving ‘basic statements’ from universal (theoretical) statements and some set of initial conditions, and of its implications for the falsification of the whole system of statements used to derive the testable predictions; or how a single potentially falsifying observation does not mean a theory is thoroughly falsified.

      • I don’t think anybody holds the “naive” version imputed to them, a claim I suspect to be an anatomical wonder, both a straw man and a red herring. For the first, again, surely Popper himself didn’t have a simplistic, naive understanding of himself, yet the dude actually had to be talked down from denying that Darwin was doing science. For the second, the notion that science is controlled experiments whose failed predictions refute a hypothesis does mislead. A good deal of science may be dismissed as merely “descriptive,” but much of the science in science is about measuring things correctly, ensuring they really are things for a start and ensuring we know how correctly we measure them. I think things like IQ, personality traits, Jonathan Haidt’s shtick, and a whole lot of evolutionary psychology get a pass because, hey, falsifiable hypotheses! A study of power stance, to grab a random example, can be well designed in a Popperian sense to falsify the hypothesis . . . but the problem is that such a trivial, common event having such drastic effects is intrinsically unlikely. The notion that statistical controls and hypothesis testing simply are science, regardless, really does trace back, I think, to Popper . . . even the non-naive versions.

        Or try it another way…read The Poverty of Historicism keeping in mind there is such a thing as a historical natural science.

        • The idea of a naïve version of falsificationism is not Popper’s own invention, but a common reconstruction of his ideas, wrongly attributed to him, especially by critics. It’s so common that Lakatos decided to name it Popper_0, as a way to clearly distinguish Popper’s own positions from those attributed to him by his critics.

          I do understand how the focus on falsificationism might dismiss all descriptive efforts in science. But I would say that Popper wouldn’t deny that collecting data, refining measures and organizing things are important to science (as a context of discovery); he is just interested in how scientists proceed to eliminate error (as a context of justification). In this case, it is the lack of error elimination that allows things such as IQ, the Big Five, and all the questionable research in social psychology to thrive – not because of falsificationism, but because the scientists are not really proposing falsifiable hypotheses; they are blindly rejecting a strawman null using questionable statistical practices, which is not at all what Popper proposed, a point that Meehl makes beautifully in the paper I mentioned earlier. Popper also had reservations about probabilistic predictions, so I doubt he would endorse the usual use of NHST in most psychological science.

        • Popper’s problems with probabilistic prediction are long forgotten by me, if I ever grasped them. But I can say that probabilistic predictions are entailed by both measurement error and natural variation in groups (which maybe is more of a biology thing, but it is science).

          Demoting real science to a “context of discovery,” sorry, seems more like a politer form of dismissal. I still think Popper is wrong both as a description of science as it is and, in this respect, even of science as it should be.

          The null hypothesis as effects due to random chance is, I agree, not sufficient. But I do not see how the null hypothesis is the problem with IQ testing, where the fundamental issues are defining intelligence and how to measure it. Carping about null hypothesis testing to check whether IQ tests are statistically valid merely ignores the question of what IQ tests actually measure. The notion that they don’t measure anything is irreconcilable with the statistically significant results. The IQ tests keep coming up with a set of systematic correlations, but this is merely noise? (I think they measure acculturation, previous education and individual achievement at academics, the latter of which is both a measure of social integration and individual facility with the various subjects being tested.)

  3. Huh, and in that earlier blog post I said something about a saber saw being different from a reciprocating saw. I thought a saber saw was like a jigsaw and a reciprocating saw is like a sawzall. Of course I am aware they both use a reciprocating motion but I thought that in common parlance this was the distinction. But I just checked and no, I was wrong, a saber saw is a reciprocating saw like a sawzall. My mistake.

    I realize this has nothing to do with this post, but comments on that earlier post are closed so I’ve gotta correct myself here.

  4. I keep saying this, but Popper is popular outside of philosophy because his attacks on communism made him a useful figure in the Cold War for various figures on the right like Thatcher. But Popper is wrong about science. He believed that he had solved Hume’s problem with induction. He argued that there really is no inductive inference, only deduction, and that we can demarcate science from pseudoscience on the basis that scientific claims can be falsified. Popper thought that scientific laws were universal, and could be falsified by finding an instance where the rule did not apply. This is false, as Carnap showed a long time ago. Any scientific law with nested quantifiers (almost all of them) is not falsifiable, in the same sense that no scientific law can be verified in Hume’s sense.

    Newton’s first law, “every object will remain at rest or in uniform motion in a straight line unless compelled to change its state by the action of an external force,” can’t be falsified in the sense that Popper requires. If I see an object that is at rest and then it starts to move, but I cannot identify the external force, I don’t rush out to publish an article that Newton got it wrong. That is because the law is of the form “For all x, if Fx, there is a y such that Gyx.” To falsify such laws deductively, one premise will have to be “There is an x such that Fx and there is no y such that Gyx.” But “There is no y such that Gyx” is just “All y are such that not Gyx.” That is, I have to confirm a universal to “falsify” a law with nested quantifiers, and we are right back to Hume’s problem with induction. Induction is not deduction. Popper is simply wrong. And his popularity with scientists continues to do great harm, because his followers believe that there is a hard distinction between science and pseudoscience marked by “falsifiable” claims when there is no such distinction.
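    (To spell out the nested-quantifier point in symbols; this is my paraphrase of the argument above, not a quotation from Carnap or Popper:)

    % Newton-style law: every change of state has some force responsible for it.
    \forall x \, ( Fx \rightarrow \exists y \, Gyx )
    % What a falsifying observation would have to establish is its negation:
    \exists x \, ( Fx \wedge \forall y \, \neg Gyx )
    % The inner clause \forall y \, \neg Gyx ("no force whatsoever acts on x") is
    % itself a universal statement, so "falsifying" the law requires confirming
    % a universal, which is Hume's problem of induction all over again.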

    • The Cold War was about class, on an international stage. Popper was useful for class war, period. Even though the USSR is gone, the owners never stop the class war, because there’s never enough. So Popper is going to stay with us, regardless.

    • I was not aware of this objection to falsificationism by Carnap. As a layman, I am probably reading too little into it, but it seems to be an objection on the plane of pure logic that does not really address Popper’s idea. In the realm of logic, to modus-tollens-falsify a conditional statement with a nested existential quantifier, we need another universal statement. But this assumes that the statement is falsifiable by itself, without further statements. If those statements belong to a scientific theory, the statement is not in isolation anymore. Popper makes it very clear in the Logic of Scientific Discovery that ‘singular existential statements’ — those we test by experiments — should be derived from a set of universal statements (a theory) and a set of singular statements (the initial conditions).

        • I don’t think that I understand your point. It is an “objection on the plane of pure logic,” but it is Popper who says that scientific inference is deductive (i.e., “pure logic”), not Carnap. He thinks that scientific laws are universals. We cannot confirm a universal (Hume), but we can, through deduction, falsify the universal by finding an instance where the universal is false. I just have to find one black swan to show that “All swans are white” is false. So far, so good. But it is a trivial task to find laws that have nested quantifiers of a form that cannot be refuted by finding one instance. The deduction that the law is false requires a universal as one of the premises. Thus, scientific inference cannot depend upon deduction alone. In my example of Newton’s first law, you cannot in principle falsify the law. If I find an instance of an object moving, I have to also confirm that there is no force acting on it. But that involves confirming a universal. That requires induction (affirming universals). The “objection” is fatal to Popper’s philosophy of science.
        You say, “Popper makes very clear . . . that singular existential statements — those we test by experiments — should be derived from a set of universal statements.” Yes, he is clear about it. And, wrong. Scientific claims are not simple universals, and we don’t test singular existential statements in experiments. There are a variety of scientific claims of various logical forms. Think of any famous experiment, and you will be hard pressed to put the claim that is being tested into a singular existential form. Galileo rolling balls down a track, Michelson-Morley, the search for the Higgs boson.

        • What I mean by “objection on the plane of pure logic” is that the nested quantifier counterexample to falsification only works against an idealized version of falsificationism — one in which a single statement should be falsifiable by itself. This is not Popper’s position: a single statement is not enough to derive predictions, so it is not falsifiable by itself. We need at least a set of existential statements about the initial conditions, and then we can derive their consequences as observable — and it is this consequence that is subject to the modus tollens way of falsifying.

        • The number of statements is irrelevant. Say there are 100 statements in the deduction. As long as one of the consequences is of a form having nested quantifiers, it requires another universal as a premise to refute the consequence. I know what Popper said. He said that scientific inference is deductive. You said “it is this consequence that is subject to the modus tollens way of falsifying.” Modus tollens:
          If p then q
          Not q
          Therefore, not p
          If p is of the form “it is not the case that there is a y such that, for all x, Gyx,” then p contains a universal. We cannot deduce universals. Certainly, there are all kinds of scientific claims with nested quantifiers. For falsificationism to work, “not p” in the modus tollens can never be of a form with a nested universal or a negative existential (which is a universal). That is just not a correct view of acceptable scientific claims. Many have nested quantifiers. Just try it with an example of a real scientific inference. How can you refute Newton’s law, or evolution (Popper didn’t think evolution was a scientific theory), or that the Higgs exists, or that the dinosaurs went extinct because of a comet impact? Show me the deduction and what the falsification would be. In each case, you will see that you need something like “everywhere we looked, we didn’t find evidence that x and y are related in the way specified.” From that you will have to infer that no xs are ys. But that is an inference to a universal – induction.

        • OK, I think we are getting somewhere. Since we need many statements to deduce a singular existential statement, the correct form of the modus tollens is this:

          1. (p ^ q ^ s) -> e
          2. ~ e
          .: ~p v ~q v ~s

          The first statement is the set of universal statements (taken from a scientific theory) plus some initial conditions. ‘e’ is a singular existential statement which will be probed by some experiment or systematic observation. Any of those universal statements could be the nested-quantifier statement from the scientific theory. If the singular existential statement turns out to be false, the whole set of universal statements is potentially falsified. In many cases, the scientist has a good idea which statement is most probably responsible for the falsifying observation (if it’s a new hypothesis in an otherwise established theory, e.g.). But sometimes they might not have a good idea, or might prefer to blame an auxiliary hypothesis, too.

          Now, I’m not a physicist, but I’ll take the bait on the Higgs boson (based on a mostly accessible reading about it here: https://wwwf.imperial.ac.uk/~dvandyk/Research/14-reviews-higgs.pdf). In the Standard Model, physicists have a pretty good idea about the model for the particle counts due to some theorized decay pathways. A single Higgs boson, in particular, should decay to two photons or two Z bosons. In the absence of a Higgs boson, the observed counts should follow a predicted rate; if the Higgs boson exists, this rate should be higher at a given energy level. So, if we accelerate some particles in a particle collider, physicists can build a singular existential statement about the observable detection of particles. In fact, they proceed exactly as Popper suggests: they derive which kind of observation would be compatible with a model in which the Higgs boson does not exist. In the end, the Standard Model was corroborated due to an excess number of counts.
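          (A toy version of that background-only test, just to make the logical shape concrete; the rates and counts below are made up, not taken from the review:)

from scipy import stats

# Made-up rates, only to show the shape of the background-only test described
# above; these are not the actual ATLAS/CMS numbers.
background = 100.0   # expected counts in the relevant mass window with no Higgs
observed = 135       # counts actually seen (illustrative)

# One-sided p-value: probability of an excess at least this large if the
# background-only ("no Higgs") model is true, treating counts as Poisson.
p_value = stats.poisson.sf(observed - 1, mu=background)
sigma = stats.norm.isf(p_value)   # the "n-sigma" phrasing physicists use

print(f"p-value under background-only model: {p_value:.2e} (~{sigma:.1f} sigma)")
# A tiny p-value is read as a refutation of the no-Higgs model, which is the
# Popper-style move the comment describes; the Higgs itself is corroborated
# only in the sense that the excess is what the Standard Model predicted.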

        • Compulsive nitpicking, sorry, but it’s a change of state of motion that Newton attributes to a force. Finding a change in state of motion without a force acting would falsify Newton’s first law. Einstein’s picture of gravity, not as a classical efficient force but as a warping of space-time (and the very notion of space-time, where Newton assumed an absolute time, which was objected to even in his own time, if I understand correctly), also contradicts Newton. But perhaps this is irrelevant, as there is universal agreement that Einstein is wrong. The thing about that, in the context of Popper, is that Einstein’s general relativity has not been falsified by experiment! Nonetheless, no respected physicist accepts general relativity as a description of how things work. The opposite of this, where scientists universally reject a hypothesis despite empirical evidence, with no good explanation, is of course Wegener’s continental drift.

          Kaszubowski’s idea that science is about making predictions that are disconfirmed by experiment and then “sets of statements” are falsified does come from Popper. I think this is amazingly false as a description of real science as it ignores description/measurement and so on. It may be formally true that “sets of statements” are required to make predictions which experiment can falsify. The thing is, a set of statements is not a simple thing, and modifications in the set raise the kind of objections raised by Duhem and Quine.

          “But perhaps this is irrelevant, as there is universal agreement that Einstein is wrong. The thing about that, in the context of Popper, is that Einstein’s general relativity has not been falsified by experiment!”

          Where are you seeing this universal agreement that General Relativity is wrong? I see an almost religious belief that the opposite is the case.

          And GR was falsified in the 1930s with the discovery that galactic rotation curves deviated from what was predicted. They attempted an ad hoc “save” with dark matter but after nearly 100 years the only evidence for dark matter remains that GR predicts the wrong thing.

        • The universal agreement is that GR will be replaced by quantum gravity, and quantum field theory, or superstrings, or loop quantum gravity or maybe the theoretical demonstration that space-time is an emergent phenomenon. GR is held to be incorrect because it predicts loss of information in black holes, because it breaks down when space-time curvature and energy density are infinite and because it is mathematically incompatible with Quantum Field Theory/Quantum Mechanics.

          GR predicts galactic rotation curves by using parameters, which include the amount of mass. The assumption behind the initial predictions is that the amount of matter was correctly measured. This to me seems to be both naive and Popperian, which, again, skips over the real science in measuring. If you want to phrase it as a matter of one of a set of statements being adjusted, it seems to me that we are, once again, right back at the Duhem-Quine objections to Popper. Also, other alternative hypotheses such as MOND (modified Newtonian dynamics) have difficulties in “predicting” other phenomena, not least of which is that it is difficult even to say what set of statements comprises it. Also, see: “Bullet Cluster.”

          But suppose my basic physics is wrong, as it may be? The assumption that, although GR has passed every other test, such as the precession of Mercury, gravitational lensing, the slowing of pulsar emissions and, with LIGO, the direct detection of gravitational waves, the single test of galactic rotation curves falsifies GR is a very nice demonstration that the naive version of Popperian falsification is alive and well.

        • The universal agreement is that GR will be replaced by quantum gravity, and quantum field theory, or superstrings, or loop quantum gravity or maybe the theoretical demonstration that space-time is an emergent phenomenon. GR is held to be incorrect because it predicts loss of information in black holes, because it breaks down when space-time curvature and energy density are infinite and because it is mathematically incompatible with Quantum Field Theory/Quantum Mechanics.

          Where are you seeing this universal agreement?

          And I do not believe you can falsify a theory; a theory is adopted because someone derives otherwise surprising predictions from it that turn out to be accurate (and so it has a high posterior probability). At this point, though, GR is the most obvious example of a degenerating research programme riddled with post-hoc adjustments that I have ever seen. When Einstein derived it, no one even distinguished between galaxies and nebulae; it works fine for the solar system.

  5. “If I’m reading a book about philosophy of science, I’d rather see an interview [with] various working scientists with historical and philosophical interests.” I’m going to plug the work of my own PhD advisor, Dave Kaiser at MIT, who’s indeed both a physicist and a historian and who has written some stuff about Kuhn that you & your blog’s readers might find interesting. So, just the droid you’re looking for.

    –Here’s a short piece on “The Structure of Scientific Revolutions” in Nature: https://www.nature.com/articles/484164a
    –Here’s a longer but deeply fascinating article about Kuhn’s intellectual exchanges with his contemporary psychological researchers: https://dspace.mit.edu/handle/1721.1/108041

    The latter is his contribution to a 2016 volume of essays marking Kuhn’s Structure at 50: https://www.press.uchicago.edu/ucp/books/book/chicago/K/bo22053950.html

    A room full of science journalists once asked Dave what he thought about Structure. People ask historians of science all the time “have you ever heard of Thomas Kuhn?”, which is sort of like asking a biologist if they’ve ever heard of Louis Pasteur. Anyway, I steal Dave’s answer for all my classes now because it’s so true: that the book is wonderful because it asks all the right questions and arrives at all the wrong answers. And it’s true that every time I reread the book, I’m struck by how prescient and insightful so many of his analyses are (especially the ones Andrew pointed to about how different disciplines arrive at consensus) but also how constrained his answers are by his 1950s-physics-and-chemistry model of science.

  6. Alternative Philosophical Interpretation of Gelman’s Statistical Practice: Multi-Level Modeling as Reification of a Quinian Knowledge Field for Minimal Adjustment Cost

    Proposition: Multi-level modeling is Quine made into numbers (and hence made dumber and more useful). In this view, priors on multi-level models across multiple outcomes are just quantifications of the spider threads holding together the web of scientific knowledge. Model checking, then, leads to a calibrated model that minimizes how much fiddling is necessary around the boundary so as to keep science from blatantly disagreeing with experience. In this interpretation, model checking is not a process of falsification but instead a reification of the reigning paradigm at the lowest possible cost (requiring the least amount of adjustment to current beliefs).
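    (A toy numerical version of that “minimal adjustment” reading, using the familiar eight-schools numbers purely as an illustration; the Quinean gloss is the comment’s, not anything Gelman has endorsed:)

import numpy as np

# Partial pooling in a one-way multilevel model, as a cartoon of "minimal
# adjustment to the web of belief": each group estimate moves toward the
# shared mean only as far as its own evidence is weak.
y = np.array([28., 8., -3., 7., -1., 1., 18., 12.])    # group estimates
se = np.array([15., 10., 16., 11., 9., 11., 10., 18.])  # their standard errors
tau = 8.0   # assumed between-group sd: the strength of the "spider threads"

mu = np.average(y, weights=1.0 / (se**2 + tau**2))      # pooled grand mean
shrink = se**2 / (se**2 + tau**2)                       # fraction of the pull
y_pooled = y + shrink * (mu - y)

print("raw estimates:   ", y.round(1))
print("partially pooled:", y_pooled.round(1))
# With tau large the threads are loose and the raw estimates barely move; with
# tau small the web is tight and everything is pulled toward the grand mean.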

    Anyway the overall point is: why did people stop reading Quine? You get most of what you get in Popper and Kuhn, but you get it without the arrogance of Popper and without the relativism of Kuhn.

    From “Two Dogmas of Empiricism”: https://www.theologie.uzh.ch/dam/jcr:ffffffff-fbd6-1538-0000-000070cf64bc/Quine51.pdf

    The totality of our so-called knowledge or beliefs, from the most casual matters of geography and history to the profoundest laws of atomic physics or even of pure mathematics and logic, is a manmade fabric which impinges on experience only along the edges. Or, to change the figure, total science is like a field of force whose boundary conditions are experience. A conflict with experience at the periphery occasions readjustments in the interior of the field. Truth values have to be redistributed over some of our statements. Re-evaluation of some statements entails re-evaluation of others, because of their logical interconnections — the logical laws being in turn simply certain further statements of the system, certain further elements of the field. Having re-evaluated one statement we must re-evaluate some others, whether they be statements logically connected with the first or whether they be the statements of logical connections themselves. But the total field is so undetermined by its boundary conditions, experience, that there is much latitude of choice as to what statements to re-evaluate in the light of any single contrary experience.

    • Quine, with that one paper, drove a stake through the heart of the logical positivist program that had dominated philosophy of science through the first half of the twentieth century. Now that I think of it, that must’ve been why my undergrad analytic philosophy class was titled “Analytic Philosophy 1900–1945.” I only came to appreciate Quine later while co-teaching a philosophy of language course.

      • Yeah, Quine. Quine figured out that the implication of Gödel’s and Tarski’s results is that we cannot draw a clear distinction between the analytic and the synthetic, and thus cannot draw the necessary distinction between data and theory upon which the standard model of empiricism depends. This has led to confusions like Kuhn’s, that we can have “incommensurable” world views. We can’t. The fact that my translation of your language into mine is indeterminate does not make it incommensurable, because I cannot say anything about your beliefs without first having a translation. This is the real confusion in Kuhn and others. How could I ever know that your views were incommensurable with mine without a translation? And the only way to get a translation is to treat your utterances as referential, as referring to objects to which I also refer. Thus, incommensurability is as impossible as a determinate translation.

      • The best criticisms of positivism were made by the positivists themselves, and positivism did not die because of Quine’s “Two Dogmas” paper. That paper is very thought-provoking, to be sure, but its arguments are by no means decisive.

  7. I don’t care either.

    Now if a company has behaved badly and I buy their product I am supporting bad behavior and maybe enabling more.

    If an academic has behaved badly (though had proper scientific motivation and conduct in their work) and I draw on their insight and maybe share that with others, there is little reason to think I am supporting their bad extra-scientific behavior and none if they are deceased.

    Most scholars of CS Peirce mention how badly he acted in his personal life – so what?

  8. Andrew,

    Whenever you mention Lakatos’s relationship with Popper, I am contractually obligated to mention that he was also openly building on the writings of another 20th Century thinker, particularly with “Proofs and Refutations,” his use of heuristics, and in one case his work as a translator.

    This suggests that Lakatos’s opinion of “How to Solve It” was a bit higher than yours.

  9. Yeah, this post is a year old, but at least somewhere in the comment section one has to mention Ludwik Fleck (1935). If one mentions Kuhn (1970) then Fleck is a must, since many of Kuhn’s ideas appeared in Fleck first, and Fleck is imho the much better and more concise author.
