“When you do applied statistics, you’re acting like a scientist. Why does this matter?”: My talk at the NYR conference this Thursday

When you do applied statistics, you’re acting like a scientist. Why does this matter?

When you do applied statistics, you form hypotheses, gather data, run experiments, modify your theories, etc. Here, I’m not talking about hypotheses of the form “theta = 0” or whatever; I’m talking about hypotheses such as, “N=200 will be enough for this study” or “Instrumental variables should work on this problem” or “We can safely use the normal approximation here” or “We really need to include a measurement-error model here” or “The research question of interest is unanswerable from the data we have here; what we really need to do is . . .”, etc. Existing treatments of statistical practice and workflow (including in my own textbooks) do not really capture the way that the steps of statistical design, data collection, analysis, and decision making feel like science. We discuss the implications of this perspective and how it can make us better statisticians and data scientists.

P.S. Here’s a video of the talk.

17 thoughts on ““When you do applied statistics, you’re acting like a scientist. Why does this matter?”: My talk at the NYR conference this Thursday”

  1. This reads like the requirements for a PhD program — narrowly defined small data, strict significance testing of a few a priori hypotheses, and so on. There’s nothing wrong with this formulation beyond the fact that it is a legacy of twentieth century ‘best practice’ and, therefore, doesn’t reflect the advances and changes in the discipline(s) during the last decades. For the most part, digital engineers and machine learning practitioners aren’t hobbled by such considerations. Ironically, the fields most mired in descriptive practice are among the ones most heavily criticized on this blog, e.g., psychology and the health sciences.

    Breiman, in his 2001 Two Cultures paper, foresaw the tension between descriptive and predictive modes of learning. Today, the latter receive the lion’s share of grant funding dollars, an inescapable thumbs down for description.

    • Steve:

      I don’t understand your comment at all. In my post, I don’t talk about “narrowly defined small data, strict significance testing of a few a priori hypotheses, or anything like that.” I mean, sure, I use “N = 200” as an example, but it’s just an example. I could as well have said “N = 20,000” or “N = 2 million.”

      • I don’t understand your comment. Are you saying you never paraphrase or reword an argument or position, thereby placing something into a wider context?

        To me, your response is merely a deft rhetorical trick, a tactic of denial enabling you to avoid meaningful engagement with its content.

        Thank you for responding at all.

        • Eli:

          Huh? It’s an abstract to a talk that I haven’t given yet! But I can assure you that nothing in my talk recommends significance testing, nor is there a focus on narrow definitions, small data, or a priori hypotheses. I have no doubt that in your comment you’re addressing something that’s bothering you; it’s just not anything that happens to be in the above abstract or the forthcoming talk.

      • I don’t understand those parts of Steve’s comment either. But his point about the two cultures is more serious. I largely agree with the modeling vs. prediction distinction (I wouldn’t use the term “descriptive” for the traditional statistical models, however), but this doesn’t directly address the need for the scientific method. I don’t think it would be fair to characterize statistical modeling as “scientific” but machine learning as “not scientific.” My guess would be that Andrew would say that both classical statistical models and machine learning predictive models involve “doing science.” At least I would say that.

        I still think there is some dispute about this. Do the two cultures extend to whether machine learning requires a scientific methodology? I hope not, but it isn’t so clear to me, especially when I look at some of the machine learning applications. I’d like to think that the poor implementations of machine learning are associated with a lack of scientific rigor.

        • And I just noticed: the original comment was from Eli, but Andrew is addressing Steve. My comment was based on Eli’s comment (and who is Steve?).

        • Agree, it is the same issue in both machine learning and statistics, and perhaps the modeling vs. prediction distinction is mostly a false one.

          It came up in comparing interpretable ML to black box ML – “The trade-off between accuracy and interpretability with the first fixed data set in an application area may not hold over time. In fact, it is expected to change as either more data accumulate, the application area becomes better understood, data collection is refined or new variables are added or defined and the application area changes. In a full data science process itself, even in the first data set, one should critically assess and interpret the results and tune the processing of the data, the loss function, the evaluation metric, or anything else that is relevant. More effectively turning data into increasing knowledge about the prediction task which can then be leveraged to increase both accuracy and likely generalization. … The full data science and life-cycle process likely is different when using interpretable models. More input is needed from domain experts to produce an interpretable model that make sense to them. This should be seen as an advantage.”

  2. Andrew – this is excellent. It helps clarify the unique selling points (USPs) of applied statistics.

    Three comments:
    1. A life cycle view
    Modern applied statistics embraces a data flow starting from problem elicitation up to communication of findings and impact assessment. Academia needs to join in and develop the parts of this sequence that have often been neglected. We discussed this 5 years ago: https://statmodeling.stat.columbia.edu/2017/08/21/two-papers-ron-kennett-related-workflow/

    2. Statistics versus engineering
    Skepticism and detective work are part of the job in applied statistics. You want to solve problems in the best possible way. You see value in hybrid models combining mechanistic, domain-specific knowledge with empirical modeling. This implies working closely with other disciplines. The advantage of statistics is that you can work in many disciplines; the disadvantage is that you cannot do it on your own. Engineers provide solutions. They build autonomous cars, 3D printers, and fancy buildings, and they provide automatic methods for feature detection and predictive analytics. Recently, some merger of computer science, engineering, and statistics has been taking place with the growing interest in interpretability and generalizability of findings. https://www.youtube.com/watch?v=ADs7fWIvuVk

    3. Data science
    The two cultures “a la Breiman” also merge. One needs an integrated view to handle modern problems. Take, for example, the use of cross validation and bootstrapping. This requires a concern for the data generation mechanism, i.e., it cannot simply be handled by an algorithm; see the sketch below. https://www.youtube.com/watch?v=Yi-e4sMK5tA
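
    A minimal sketch of that point, assuming scikit-learn and hypothetical clustered data in which each subject contributes several rows (the variable names and simulated data are illustrative only): naive K-fold cross-validation that ignores the clustering leaks subject-level information across folds and overstates accuracy, while a grouped split reflects the actual data generation mechanism.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GroupKFold, KFold, cross_val_score

rng = np.random.default_rng(0)
n_subjects, n_per_subject = 40, 10
n = n_subjects * n_per_subject

# Clustered data: each subject contributes several rows. Features are noisy
# copies of subject-level covariates, and the outcome also includes an
# unobserved subject effect on top of the signal carried by the covariates.
subject = np.repeat(np.arange(n_subjects), n_per_subject)
z = rng.normal(size=(n_subjects, 3))           # subject-level covariates
u = rng.normal(0.0, 1.0, n_subjects)           # unobserved subject effect
x = z[subject] + rng.normal(0.0, 0.1, (n, 3))  # per-row noisy measurements
y = z[subject, 0] + u[subject] + rng.normal(0.0, 0.3, n)

model = RandomForestRegressor(n_estimators=200, random_state=0)

# Naive K-fold ignores the clustering: rows from the same subject land in
# both training and test folds, so the model can memorize each subject's
# effect and the score comes out optimistic.
naive = cross_val_score(
    model, x, y, cv=KFold(n_splits=5, shuffle=True, random_state=0)
)

# Grouped K-fold holds out whole subjects, matching how the model would
# actually be used on subjects it has never seen.
grouped = cross_val_score(model, x, y, groups=subject, cv=GroupKFold(n_splits=5))

print(f"naive CV R^2:   {naive.mean():.2f}")    # optimistic
print(f"grouped CV R^2: {grouped.mean():.2f}")  # closer to honest generalization
```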

    I hope this blog, and your talk, will raise interesting discussions on the topic.

    Thank you for posting it.

    Best,

    ron

    PS: For the non-technical role of a data scientist, in a sense extending the role of the industrial statistician envisaged by Deming, see https://www.wiley.com/en-gb/The+Real+Work+of+Data+Science%3A+Turning+data+into+information%2C+better+decisions%2C+and+stronger+organizations-p-9781119570769

    • Statistics is also applied in engineering, manufacturing, marketing, insurance, and many other fields that not even their practitioners would consider “science”.

      • Slow down there, Anon. Don’t you know the field formerly known as “Statistics” is now named “Data Science”? You gotta keep up with the times.

      • ‘Statistics is also applied in engineering, manufacturing, marketing, insurance, and many other fields that not even their practitioners would consider “science”.’

        Yes, but in these applications the purpose is to actually get the correct answer to a problem. On Andrew’s blog, we talk about statistics for which the intent is to justify some personal belief, political belief, policy action and/or – at the very least – to conclude something surprising and/or pleasing. For example: if you show a 30min video about the malleability of intelligence to 30 below average students in grade 8, how much will it change their peak earning power? 15%!!! Wow!!! Surprising, pleasing *and* with policy implications! Three points! No one cares if the conclusion is actually right and it’s almost better if it looks obviously stupid. The magic of NHST can turn stupid into smart!! Brilliant!!! Genius!!!

        So I think you’re thinking of statistics from some bygone era of common sense.

  3. Apropos of nothing in the original post, although perhaps something in the comments – does anyone else feel there has been an increase in rather odd comments that have drifted in from other parts of the internet that do not have a good reputation for reasoned argument? There appears to have been a bit of a trend over the last two years, at least from my point of view.

  4. Thank you for saying this! The reason so many educated people believe that science is (only) a body of facts is that we teach it that way. We send social science majors to do Science 101, which requires them to memorize facts. (Even the labs are focused on inferences that are pretty close to facts.) Then we send scientists to Social Science 101, which requires them to memorize facts about American Government. Even statistics classes are too often about math facts or how to do X in Y program. And we teach very little about science as a system of varied logical inferences, and about how to avoid the common logical fallacies in those inferences (the most common ones are not the ones with names). I am trying to teach a class like this, and because it doesn’t fit into our curriculum paradigm, no one knows where such a class belongs.

    • Vanessa:

      If education included memorizing facts, we’d all be better off. Unfortunately, the chanting in education circles over the last 20 years has been about confidence building and critical thinking (how “critical thinking” can lead to correct answers without knowing any facts is not quite clear). So people who don’t know any facts now feel confident to “think critically” about whatever BS they perceive from their little one-cubic-foot cranium, kind of like preachers theorizing about the arrangement of furniture in heaven by looking through a hole in the clouds.

      Using applied statistics properly – or applied physics or applied biology or whatever other standardized methodology needs to be employed to solve a problem – requires that people know the standards and know how to use them. It’s not just reasoning. It’s reasoning in the context of the facts of the problem and the facts of the universe in general. If you can’t compare your outcomes to known reality and reasonably judge the likelihood that they’re true, you can’t do science. End of story.
