Putting Megan Higgs and Thomas Basbøll in the room together

OK, they’re both on the blogroll so maybe they already know about each other.

But, just in case . . . here are two recent posts:

Higgs, Fact detector? It is not.:

Let’s assume that most people see science as the process of collecting more and more facts (where facts are taken as evidence of knowledge). I find this a realistic assumption because of how science is typically presented, taught, and discussed. I wholeheartedly believe it is more about understanding ignorance than collecting facts, but even then have found myself accidentally reinforcing this view with my own kids at times. . . . I also see too much emphasis from scientists on fact-finding and fact-reporting and adherence to expectations for this in dissemination of work. . . .

This is where my views on use of Statistics come in. I see statistical methods often used in a way that reinforces, and even further contributes to, a fact-centered way of operating in science. The common (and wrong) explanation for what some statistical methods provide is that of a litmus test for whether observed effects “are real or not.” What does “real” mean? I can’t help but interpreting it as a reflection of the view of Statistics as a convenient fact-finding machine. . . .

Statistical inferences are, and should be, complex and uncomfortable, not simple and comforting. Statistical inference is about inferring based on combining data and probability models, not about judging whether an experimental or observational result should be taken as a fact. There is no “determining” and no “answers” and no distinguishing “real” from “not real” — even though this language is common in scientific reporting. It is not helping science to keep pretending as if Statistics can detect facts.

Basbøll, The Gradual Formation of Knowledge in Discourse:

Knowledge is indeed a “state of mind”, i.e., “justified, true belief”, but that state should also always be thought of as a “stance”, a practical orientation in a social context. . . . Discourse is made up of gradual, ongoing processes. And they are supported by a whole array of practices, from the very local practices of the college classroom, to the very global practices of the published literature. . . .

We seem to have grown impatient with thinking. We might also say that we have too much blind trust in science. We no longer try to get our minds around difficult ideas. Instead, we imagine that “the facts are known” and that an expert somewhere knows those facts. All we have to do is listen and believe. It is the role of the scientist to confidently assert, not to “think out loud”. We’re unwilling to entertain a tentative formulation. . . .

To propose to subject a fact to further “thinking” (“after the fact,” as it were) is considered either quaint or rude, and in some cases outright dangerous.

Related: the problem with statistics as a tool for “uncertainty laundering.”

Related: Lots of examples of academic/media authority figures attacking as quaint or rude any questioning after the fact, as it were. More generally, lots of resistance to the idea that published and celebrated claims are not actually true facts.

57 Comments

  1. Megan Higgs says:

    Andrew — Thanks for putting us in the room together! Can’t wait to read more from Inframethodology.

  2. Terry says:

    I agree that science should include an ongoing dialog that grapples with difficult and messy ideas and acknowledges those difficulties as the dialog evolves. We need a class of very subtle people with a willingness to grapple with messy uncertainty to do this difficult work.

    But, when a schmo walks into a car repair shop and asks “will my car blow up when I drive it”, what answer should we give that schmo?

    • Andrew says:

      Terry:

      I guess we should give the schmo some answer like: If you don’t replace the cap on your oil tank, your car is likely to catch on fire!

      More generally, I think your point is that sometimes we can make strong claims and learn facts. And I agree with you there. Maybe we spend so much time thinking about scientific uncertainty because these are the settings where statistics can make more of a difference. There are lots of problems where we have near-certainty so we don’t need to think hard about statistics at all.

      • For the most part, we don’t do research in areas where we have strong well established facts, at least until they start to look shaky due to some new evidence.

      • jim says:

        This is an issue of validation.

        We can accurately inform Schmo what will happen to his car only to the extent that we have validated our claims with prior experience.

        If Schmo smokes cigars and his gas tank is an open 20 gallon tub in the back seat, we should tell Schmo his car has a very good chance of exploding. Same if he has a ’74 Pinto. :)

        If Schmo has a 2016 Honda Pilot with 40,000 miles on it and no problems, we can tell him that the chances of his car exploding are exceedingly small. We know this because there are millions of 2016 Pilots driving around and very few explosions associated with them. IOW, the “explosion safety” of the 2016 Pilot is validated with millions of tests.
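        (A rough sketch of how “millions of tests” becomes a number, using made-up figures rather than actual Pilot data: if essentially zero fires are observed over n vehicle-years, an exact Poisson bound caps the fire rate at about 3/n, the so-called “rule of three.”)

        ```python
        from scipy.stats import chi2

        def poisson_rate_upper(events, exposure, conf=0.95):
            """Exact (Garwood) upper confidence bound on a Poisson event rate."""
            return chi2.ppf(conf, 2 * (events + 1)) / (2 * exposure)

        # Hypothetical exposure: 1,000,000 vehicle-years with zero fires observed.
        print(poisson_rate_upper(0, 1_000_000))  # ~3e-6 fires per vehicle-year ("rule of three")
        ```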

        • Jonathan (another one) says:

          “Reports range from 27 to 180 deaths as a result of rear-impact-related fuel tank fires in the Pinto, but given the volume of more than 2.2 million vehicles sold, the death rate was not substantially different from that of vehicles by Ford’s competitors.”
          https://www.popularmechanics.com/cars/a6700/top-automotive-engineering-failures-ford-pinto-fuel-tanks/

          I’m not sure how exactly to take that figure relative to other cars, but the absolute probability was pretty low. (Compared to what, I hear you say…)

          • jim says:

            ha, that’s hilarious. I remember it when I was a kid, it was a huge deal, but now we find out it wasn’t even real, just a consumer advocacy conspiracy theory!

            • Terry says:

              I vaguely remember reading the Pinto was chosen as the target of criticism because it was a Detroit car and that the worst cars were foreign.

              Did a little checking on Wikipedia and found:

              UCLA law professor Gary T. Schwartz, in a Rutgers Law Review article (see Section 7.3 NHTSA Investigation above), studied the fatality rates of the Pinto and several other small cars of the time period. He noted that fires, and rear-end fires in particular, are very small portion of overall auto fatalities. At the time only 1% of automobile crashes would result in fire and only 4% of fatal accidents involved fire, and only 15% of fatal fire crashes are the result of rear-end collisions.[136] When considering the overall safety of the Pinto, Schwartz notes that subcompact cars as a class have a generally higher fatality risk. Pintos represented 1.9% of all cars on the road in the 1975–76 period. During that time the car represented 1.9% of all “fatal accidents accompanied by some fire.” Implying the car was average for all cars and slightly above average for its class.[137] When all types of fatalities are considered, the Pinto was approximately even with the AMC Gremlin, Chevrolet Vega, and Datsun 510. It was significantly better than the Datsun 1200/210, Toyota Corolla and VW Beetle.[136] The safety record of the car in terms of fire was average or slightly below average for compacts, and all cars respectively. This was considered respectable for a subcompact car. Only when considering the narrow subset of rear-impact, fire fatalities for the car were somewhat worse than the average for subcompact cars. While acknowledging this is an important legal point, Schwartz rejects the portrayal of the car as a firetrap.[138]

            • Terry says:

              I think “hoax” is a better term than “conspiracy theory”.

              A hoax is one party putting out something that credulous people will spread thinking it is true. Hoaxes are especially easy when people want to hear confirmation of a particular pre-conception. Hence the durability of the Michael Brown and Trayvon Martin hoaxes.

              But a conspiracy theory requires agreement among a large number of people. That is much harder to do. It does happen, though, when a number of people profit from the same dishonesty, such as the numerous conspiracies to rig the tax code and other legislation.

              • jim says:

                Or just a lie that the media soaked up like a desiccated frog soaking up water?

                Going on to the topic of lying, what constitutes a “lie”?

                Is a “lie” restricted to saying something false? Or is it also a lie to claim to believe something that’s almost certainly not true? Or is it also a lie to make an inference that misrepresents the truth? Or is it also a lie to simply leave out relevant facts? Or is it also lying to propagate a story without establishing the actual facts?

              • Terry says:

                jim:

                I’ve actually thought a lot about what should be called a “lie”. A lie requires knowledge of untruth, which is often very hard to know.

                I prefer to just call these things “dishonest” where “dishonest” includes reckless indifference to the truth.

              • jim says:

                Terry says: ‘I’ve actually thought a lot about what should be called a “lie”’

                excellent, I really wish there was a lot more discussion of that.

                I think there’s a certain amount of cherry picking information that’s outright lying. I mean, suppose an academic economist writes a column arguing the minimum wage (whichever way). IMO they are obliged to represent the entire spectrum of generally accepted evidence insofar as space limitations allow, not just the evidence that supports their point of view. If there is a prominent paper with evidence opposing their position it is, in my view, a lie to simply pretend that paper doesn’t exist. To be telling the truth, they must at the minimum acknowledge and dismiss the paper’s conclusions.

                To put it more simply, it would be an outright lie for an academic economist to insinuate or imply in any way that the matter is settled among academic economists.

                I like your definition, but IMO even by that restricted definition, a lot of people are claiming, implying and insinuating things that are patently false.

              • Martha (Smith) says:

                Jim said, “I think there’s a certain amount of cherry picking information that’s outright lying.”

                Agreed.

              • digithead says:

                A hoax is untrue so the Trayvon Martin and Michael Brown cases are not hoaxes. It’s an uncontested fact that both men were shot to death. Whether those shootings were justified or racist profiling is a matter of debate, but not a hoax.

              • jim says:

                “hands up don’t shoot” is the hoax

  3. Matt Skaggs says:

    “Statistical inference is about inferring based on combining data and probability models, not about judging whether an experimental or observational result should be taken as a fact. There is no ‘determining’ and no ‘answers’ and no distinguishing “real” from “not real.”

    I’m curious about something. Suppose NASA announced that it is sending a probe to Pluto. Would you say, “Wow! There they go treating their experimental results and model outputs as facts. I bet they miss by a million miles and that probe goes flying off in a random direction!”

    In simple English, NASA performs experiments and runs models, then it uses the results to “determine” facts, and then it uses those facts to “determine” the velocity and trajectory (“answers”) that will allow it to execute a very precise mission that would be otherwise impossible.

    • Martha (Smith) says:

      The way I’d say it is:

      NASA performs experiments and runs models, then they use the results to try to find facts, and then they use the tentative conclusions to try to determine the velocity and trajectory (“answers”) that will hopefully allow them to execute a very precise mission that has a better chance of success than if they had not carefully devised the experiments, carefully carried out the experiments, and carefully applied the models.

      • jim says:

        The way *I* would say it is:

        “NASA performs experiments and runs models…”

        INSERT: ***WHEN THE MODELS HAVE BEEN CHECKED, DOUBLE CHECKED, TESTED AND VALIDATED TO A PARTICULAR LEVEL OF PRECISION AND/OR ACCURACY***

        “then they use the results to try to find facts…”

        Everyone keeps skipping the only step that matters: validation. A model is just a toy until it has been tested and validated.

    • If NASA just launched a probe to Pluto they’d miss by a million miles no problem. They *constantly* perturb that probe’s velocity through feedback in order to hit Pluto. In their mathematical calculation at initial launch, it’s a “fact” that a velocity in direction (1.20239771,2.47577114,9.194776902) in some units is going to make them get to Pluto. It’s not true, but it’s the precise answer that comes out of their calculations.

    • jim says:

      Matt: Validation.

      NASA’s models are built on principles that have been tested and validated many millions of times over; then the models themselves are tested and validated again many times.

      Contrast that with many statistical inferences in social science and, increasingly, medical science, which are announced as discoveries without a single validation.

      Models are great. And when they have been repeatedly tested and validated, they can be used as an investigative tool – but the results of those investigations should also be repeatedly tested and validated. That’s how we come to understand them as fact.

  4. Garnett says:

    It seems that this is largely due to the funding mechanisms. In my two decades of biomedical research I’ve never seen anyone get funding from NIH, DOD, or the VA to do anything other than data collection. It’s no surprise that most scientists, at least in this field, are strict empiricists.

  5. Dale Lehman says:

    These are beautiful statements about what statistics and science should be. But my concern is that the more we admit the realities of uncertainty, the more that feeds a skepticism that there are no facts, or that alternative facts are whatever you want them to be. Skepticism can be healthy, but only a certain kind of skepticism is. Skepticism that permits all evidence to be discarded if you don’t like what it says is not healthy. I am very afraid that we are not making progress in that larger battle.

    • Andrew says:

      Dale:

      I agree. This is the Gresham’s law problem that we’ve been discussing. (See here for another example.) I’m not sure what can be done about this.

    • jim says:

      Dale Lehman says: “But my concern is that [this] feeds a skepticism that there are no facts”

      As modern statistical inference in social and health science is practiced, the truth is that there are very few facts – and an extraordinary array of wild claims.

      The key to all of these problems can be summarized in one word: validation.

      That’s the problem with much modern statistical inference (and broader science in general). If you conduct a research project and conclude that saying nice things about statisticians makes people feel less annoyed by and more accepting of them, great! But that’s not the foundation for $1B in policy expenditures. It’s a new hypothesis that must be tested many times before it can be treated as fact.

    • Megan Higgs says:

      I agree that promoting healthy skepticism is incredibly important. I also agree we have a long way to go to get there.

      I have had people (who know how much I love science and research) say “Aren’t you worried that sharing your experiences and ideas will just fuel anti-science sentiments and lead people to feel justified ignoring results?” I admit — yes, I have worried about that. But, I don’t think remaining silent will at all help the situation and may make it worse.

      I think those who do not label themselves as scientists are far better at detecting B.S. than those who do label themselves as scientists give them credit for. They may not be able to put words to why their B.S. detector is sounding the alarm, but they feel it. And that leads to mistrust and discarding of information that probably should be trusted. I think healthy skepticism has to come from a foundation of trust, so that is where we need to start. We have to build the trust back up — even if things we did to erode it were not intentional (like over-selling single-study results associated with small p-values as facts). We have to discuss our methods, and problems with them, openly. We need to work to improve them — and not pretend that science is easier and more certain than it is. Pretending as if there isn’t room or need for improvement will only further erode trust and lead to more unhealthy skepticism. That’s my opinion anyway…

      • Andrew says:

        Megan:

        I often seem to encounter a particular sort of boneheaded overconfidence fueled by statistics, for example that pundit who was giving Hillary Clinton a 98% chance of winning the 2016 election, or the various economists and economics-hangers-on who fall for every new regression discontinuity study that comes down the pike . . .

        When I see these, I’m reminded of Bertrand Russell’s line, “This is one of those views which are so absurd that only very learned men could possibly adopt them.” To paraphrase Hammerstein, you’ve got to be carefully taught to be so credulous. But once that credulity’s there, it can be hard to shake.

      • yyw says:

        This cannot be over-emphasized. The way to earn public trust is to be utterly honest about what you know and what you don’t know. Emphasize not hide uncertainty. As Feynman said 45 years ago, “I would like to add something that’s not essential to the science, but something I kind of believe, which is that you should not fool the layman when you’re talking as a scientist. … I’m talking about a specific, extra type of integrity that is not lying, but bending over backwards to show how you’re maybe wrong, that you ought to do when acting as a scientist. And this is our responsibility as scientists, certainly to other scientists, and I think to laymen.”

      • jd says:

        “I think those who do not label themselves as scientists are far better at detecting B.S. than those who do label themselves as scientists give them credit for.”

        Yes! I think I have mentioned this on blog posts that have featured studies that seem ridiculous simply upon reading the title.

        Along a similar line but slightly off topic – when doing analysis, I have recently started asking for feedback from smart people on my team who don’t really know much of anything about statistics or the research in question. Their responses have been extremely helpful and insightful. I believe this is because they are forced to think and reason about the problem rather than having an automatic learned response that they can easily regurgitate.

  6. Anonymous says:

    There is no “determining” and no “answers” and no distinguishing “real” from “not real” — even though this language is common in scientific reporting. It is not helping science to keep pretending as if Statistics can detect facts.

    Two hundred years ago Laplace used statistical methods to determine the mass of Saturn: “I find that it is a bet of 11,000 against one that the error of this result is not 1/100 of its value”

    Laplace was right given our current value for the mass of Saturn. That would seem to be an example of statistics leading to facts.
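    To put a number on Laplace’s bet: odds of 11,000 to 1 correspond to a posterior probability of about 1/11,001 ≈ 9×10⁻⁵ that the error exceeds 1% of the value. A minimal sketch of what that implies about the precision of the estimate, assuming a normal (Laplace) approximation to the posterior:

    ```python
    from scipy.stats import norm

    # Laplace: odds of 11,000 to 1 that the error is under 1/100 of the value.
    p_exceed = 1 / 11001          # posterior probability that the error exceeds 1%
    z = norm.isf(p_exceed / 2)    # two-sided normal quantile, about 3.9
    rel_sd = 0.01 / z             # implied posterior sd as a fraction of the estimate
    print(f"P(error > 1%) = {p_exceed:.1e}; implied sd ≈ {rel_sd:.2%} of the estimate")
    ```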

    Laplace also used statistics to estimate the parameters related to the orbital motion of Jupiter and Saturn well enough to determine a near resonance in their orbital periods. Observations indicated Jupiter was getting closer to the Sun, which, if continued, would destroy the earth. With the near resonance, Laplace predicted that Jupiter would reverse (which it has) and the earth would be saved.

    That would seem to be an example of distinguishing “real” (earth saved) from “not real” (earth destroyed).

    It’s understandable why people talk like this though. Statistics as understood for the past century has been a disaster. Whenever your methods don’t really work, you have to add in a hefty dose of “use your intuition to fudge things”. You then start emphasizing the “art” of the subject and make your language far fuzzier.

    A similar thing happens in Engineering fields when they outrun their foundations in Science and have to use barely workable empirical kludges. Astrologers and fortune tellers do the same thing.

    • Andrew says:

      Anon:

      Indeed. Statistics has given us lots of facts, for example that babies are more likely to be boys than girls, and in enumerating the settings where it’s better to go for it on fourth down, and all sorts of other things. Perhaps the problem could be better stated as: many people see statistics not just as a way to sometimes nail down truths, but as a method for routine discovery. For example, someone can dream up a medical or social intervention, then try it out in a randomized experiment, and then learn something important . . . I don’t think this will typically go so well without strong measurement and strong theory, but in statistics textbooks the idea of inference from experiments is typically presented without reference to measurement or theory.

    • jim says:

      Anonymous says: “Two hundred years ago Laplace used statistical methods to determine the mass of Saturn”

      I could do that too. Or I could have my stoner neighbor do it, or my other neighbor’s three-year-old. But no one would believe it, and justifiably so. It doesn’t matter who did it or by what method they did it.

      All that matters is that the result is validated by many different methods.

      VALIDATE

      VALIDATE

      VALIDATE

      Anyone can claim anything and Trump does that every day. If you want your claim to be accepted as fact, it has to be…..

    • Megan Higgs says:

      Andrew’s response helps explain the context of the quote and the reason I wrote that sentence. I think we need to be talking about and acknowledging the limitations of inferences based on statistical methods – and not pretending as if they give us more than they do in terms of justifying “discoveries”. There are plenty of examples of statistics (numbers calculated from data) and Statistics (theory, methods, models, inferences, etc.) being used to help in the development of something we now take as a fact or distinguishing ‘real’ from ‘not real.’ I am not anti-Statistics, I’m just pro- understanding its limitations and not over-selling it.

      My experiences as a statistician and reading reports about research have convinced me that statistical methods are often used — even in the context of a single study — as if they (almost alone) are capable of detecting whether something should be declared a fact or an observed effect should be declared “real.” Statistical methods and their results (inferences!) are routinely discussed as if they are (alone) capable of magic that they are not capable of — and the fact-based language often gets worse the further the results move into the general media. While this happens in many disciplines — it seems easiest to see and wrap our heads around the problem in human health related research.

      • I think the statement you made: “Statistical inferences are, and should be, complex and uncomfortable, not simple and comforting. Statistical inference is about inferring based on combining data and probability models, not about judging whether an experimental or observational result should be taken as a fact. There is no “determining” and no “answers” and no distinguishing “real” from “not real” — even though this language is common in scientific reporting. It is not helping science to keep pretending as if Statistics can detect facts.”

        would work better if it were railing against the *automatic* determination of facts. People think of statistics like a magic 8 ball… put in reams of printouts of numbers, ask it a yes or no question, then press a button… if the light on top lights up, the answer is YES. If the light stays off… the answer is NO.

        But statistics, particularly Bayesian, isn’t a magic 8 ball, it’s a way to quantify how much information you have about what will happen. Laplace found out what the mass of Saturn was because his observations were informative given his models and his calculation told him that there was very little left to determine… he had constrained the mass to within about 1% of the most likely value. If he had found that it was constrained to within only 37% of the most likely value, he probably wouldn’t have considered it a fact that he knew the mass of Saturn yet… he’d have looked for more information.

      • Matt Skaggs says:

        “There are plenty of examples of statistics (numbers calculated from data) and Statistics (theory, methods, models, inferences, etc.) being used to help in the development of something we now take as a fact or distinguishing ‘real’ from ‘not real.’ I am not anti-Statistics, I’m just pro- understanding its limitations and not over-selling it.”

        This porridge seems just right. One thing that I think is often confused is the limits we encounter in measurement precision, versus the limits we encounter in confidence in our inference. If a researcher has only ever pondered questions in social science, the two might seem the same. But in many real world instances, the fact that we cannot measure quantities perfectly has zero effect on the confidence we have that our result is sufficiently robust. Here is a 1994 paper by Naomi Oreskes, published in Science, that totally misses the point about measurement precision:

        http://people.uncw.edu/borretts/courses/bio534/readings/Oreskes1994_verification_validation_confirmation_of_numerical_models_in_the_earth_science.pdf

        She does correctly endorse Jim’s point about validation, though!

    • More Anonymous says:

      Anonymous (or anyone) — Re: Laplace, that’s a fascinating example. Is there an article, book, or etc. you can recommend on it?

      • Terry says:

        That’s a pretty amazing story about Jupiter and Saturn and Laplace. He was able to do something that subtle with the tools available two hundred years ago.

        He did a lot of things. From Wikipedia:

        Laplace formulated Laplace’s equation, and pioneered the Laplace transform

        What are the odds! Laplace made two scientific breakthroughs that had the same name as him! You can’t tell me that’s just a coincidence. Like Lou Gehrig dying of Lou Gehrig disease! People scoff at conspiracy theories, but once you begin to connect the dots, it is obvious that conspiracies are everywhere.

  7. David Austin says:

    On science and facts (or perhaps on “science” and “facts”):

    From a large, southeastern US, STEM-oriented university, in a relatively large (enrollment averaging over 200/term) general education course with about half of those enrolled having declared STEM majors:

    Beginning of term survey Fall 1986-Spring 2010 (N about 10,000)
    The main aim of science is to discover facts. 74% True (Choices were True/False)
    (The percentage varied by no more than a couple of points from term to term.)

    Fall 2010 – Spring 2015 N=1936 (5 item Likert scale)
    The main and defining aim of science is to find facts.
    Beginning of term survey 17% (Strongly Disagree + Disagree)
    End of term survey 26% (Strongly Disagree + Disagree)

    With an increase in satisficing behavior, the surveys were discontinued after Spring 2015. (There is good reason to believe that students were being asked to complete many surveys outside the course.)

    Fall 2015 – Spring 2016 (N=419) Pre-Test + Post-Test, open-book, online and not proctored, time limit 1 hour, together worth 10% of course grade, where completing both was necessary (but not sufficient) for passing the course (N=419 for each of the Pre-Test and the Post-Test)
    The main and defining aim of science is to find facts.
    Pre-Test 53% False
    Post-Test 71% False

    The instructor tried hard each term to convince students, using many examples, that explanation (as well as the discovery of new things to be explained) is central in science. The instructor also told them repeatedly (orally and in required reading) during the course that the statements above are false and indeed badly misleading. The instructor’s efforts were not wholly successful. It is not clear what range of reasons students had for not rejecting the statement when they did not.

    During 1986-2015, the percentage of students who responded affirmatively on the surveys to statements endorsing some form of Young Earth Creationism dropped gradually from over 40% to about 20%. During the same period, the university’s acceptance rate for freshman applicants decreased significantly and by other common measures became more selective.

    • Martha (Smith) says:

      “The instructor also told them repeatedly (orally and in required reading) during the course that the statements above are false and indeed badly misleading.”

      One thing I needed to remind myself of repeatedly when teaching was, “Telling is not teaching”.

      • David Austin says:

        One thing that I need to remind myself of repeatedly when teaching is that some repetition is necessary; explanation, illustration and careful use of metaphor, with judicious use of humor, do not suffice.

        One thing that “science” means to at least a significant minority of these students is “not (my) religion”; for science needs facts, but religion requires not facts but faith. For those students, the remark, “It is as difficult to do good religion as it is to do good science,” is received not as a compliment to (some) religion, but as a threat.

  8. jim says:

    hey! maybe “science” is different things to different people!

    To many people “science” is what delivers medical treatments or produces more oil or better windmills or Apple Watches.

    To other people “science” is what discovers why the Himalaya exist; what makes the stars in the sky, or how life came to be on earth

    To still other people “science” is something that helps you lobby for X, Y, Z.

    To other people it’s a DK book on the coffee table

    To lotsa people it’s Love Canal, DDT, Exxon Valdez, Roundup Ready crops, Bisphenol A, and it SHOULD ALL BE STOPPED NOW

    and I’d say to everyone it’s some variation of all of the above.
