“The Taboo Against Explicit Causal Inference in Nonexperimental Psychology”

Kevin Lewis points us to this article by Michael Grosz, Julia Rohrer, and Felix Thoemmes, who write:

Causal inference is a central goal of research. However, most psychologists refrain from explicitly addressing causal research questions and avoid drawing causal inference on the basis of nonexperimental evidence. We argue that this taboo against causal inference in nonexperimental psychology impairs study design and data analysis, holds back cumulative research, leads to a disconnect between original findings and how they are interpreted in subsequent work, and limits the relevance of nonexperimental psychology for policymaking. At the same time, the taboo does not prevent researchers from interpreting findings as causal effects—the inference is simply made implicitly, and assumptions remain unarticulated. Thus, we recommend that nonexperimental psychologists begin to talk openly about causal assumptions and causal effects. Only then can researchers take advantage of recent methodological advances in causal reasoning and analysis and develop a solid understanding of the underlying causal mechanisms that can inform future research, theory, and policymakers.

What they say makes sense. But I think they are way too unskeptical about statistical methods for causal inference from nonexperimental data, what they call “recent methodological advances in causal reasoning and analysis.” See here for a more skeptical take.

Also relevant are Jennifer’s comments starting at 43:16 of this conversation.

31 thoughts on “The Taboo Against Explicit Causal Inference in Nonexperimental Psychology”

  1. Causal inference is a central goal of research.

    I know it’s a tired discussion, but: no, it’s not, in general. Where is the causal inference in F = ma, or in any of the many physical laws? In fact, causal inference seems more like a heuristic used in the early phases of research on a topic.

    • Anon:

      They’re just talking about psychology, not physics. In psychology, causal inference is certainly a central goal of research. I agree it’s not the only goal, but it’s an important goal.

      • I’m fine with that; they should just say: “Causal inference is a central goal of psychology research.” Instead they imply it is the goal of research in general.

        • How could the equation have been developed without causal reasoning about the effects of the weight and speed of an object?

        • Reorganizing the math does not change the causal reason for which the original equation was developed. Physics is about understanding how the physical world works.

          It is bizarre to think that causation is a heuristic rather than the fundamental driver of the long history of physical science.

        • Do you think the desire to understand mass was driven by the prior understanding of force and acceleration? Of course not.

        • Do you think the desire to understand mass was driven by the prior understanding of force and acceleration? Of course not.

          Why else would you care about mass?

        • I will assume that you are sincere in asking what appears to be an absurd question and answer it.

          For its own sake. There are many reasons to be concerned with mass without being concerned with speed, such that it would be absurd to include speed in the calculation.

        • For its own sake. There are many reasons to be concerned with mass without being concerned with speed, such that it would be absurd to include speed in the calculation.

          People only cared about mass based on how many guys you needed to bring to carry something, how to make an arrow able to penetrate the hide/shield, and other stuff like that.

          If not for the relationship m = f/a, no one would care about m.

        • Anoneuoid:

          So your argument is that the more complex causal direction is what drove interest in the relationship among these ideas rather than the simpler causal direction?

          Either way we are talking causation.

        • So your argument is that the more complex causal direction is what drove interest in the relationship among these ideas rather than the simpler causal direction?

          Either way we are talking causation.

          You can look at it either way. But there is no causality in the “law”. That is why I said the central goal of research seems to be more about getting rid of causal thinking. You may use it early on when trying to understand a phenomenon, but it’s just a stepping stone.

      • This discussion seems kind of irrelevant. Even if we think of causality as central to physics, I think that the sort of causality represented by Newton’s laws is much different than the sort of causality that arises in social science.

    • It would be better to think:

      v(t+dt) = v(t) + (f(t)/m) dt

      Now, causality is clear, as there is a relationship between our current point in time and the state of the world at a point an infinitesimal into the future.

      And yes, the math is fully rigorous using nonstandard analysis.
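
      Numerically, that update rule is just a forward Euler step. A minimal sketch in Python (the force, mass, step size, and the simulate helper are all hypothetical, for illustration only):

      def simulate(f, m, v0, x0, dt, n_steps):
          # Forward Euler integration of v(t+dt) = v(t) + (f(t)/m) dt,
          # with the position then updated from the new velocity.
          v, x = v0, x0
          for step in range(n_steps):
              t = step * dt
              v = v + (f(t) / m) * dt  # velocity update from the force
              x = x + v * dt           # position update from the velocity
          return x, v

      # Hypothetical example: constant -19.62 N force on a 2 kg mass,
      # so a = -9.81 m/s^2, starting from rest at x = 100 m.
      x, v = simulate(f=lambda t: -19.62, m=2.0, v0=0.0, x0=100.0,
                      dt=0.001, n_steps=1000)
      print(x, v)  # position and velocity after one simulated second

      Each new state is computed from the state one step earlier, which is exactly the temporal ordering being identified with causality here.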

      • Now, causality is clear, as there is a relationship between our current point in time and the state of the world at a point an infinitesimal into the future.

        You can calculate the state of the system in the past using the same equation. Also it is weird because f(t) = m*a(t), so really m shouldn’t be there.

        • I’m not quite sure what you mean, but:

          a(t) = (v(t+dt)-v(t))/dt

          m * a = m * (v(t+dt)-v(t))/dt = f(t)

          solve for v(t+dt)

          Yes, you can calculate backwards in time, but our experience says that there’s an arrow of time. Because of this arrow of time, the fundamental concept of causality exists. Without the arrow of time, causality doesn’t make any sense.

          It turns out it’s a bit of a fundamental mystery why there is an arrow of time. This is a cutting-edge question in fundamental physics.
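
          To make the reversibility point concrete, here is a sketch in which the same update, run with a negated time step, approximately recovers the earlier state (constants are hypothetical; recovery is exact only up to the Euler scheme’s discretization error):

          def step(x, v, f_t, m, dt):
              # One step of v(t+dt) = v(t) + (f(t)/m) dt.
              v = v + (f_t / m) * dt
              x = x + v * dt
              return x, v

          m, dt, force = 2.0, 0.001, -19.62  # hypothetical constants
          x, v = 100.0, 0.0
          for _ in range(1000):
              x, v = step(x, v, force, m, dt)   # integrate forward in time
          for _ in range(1000):
              x, v = step(x, v, force, m, -dt)  # same equation, negated step
          print(x, v)  # close to the initial (100.0, 0.0)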

  2. Depicting a causal relationship in the standard A -> B way is very often problematic when either A or B or both are actually conceptual aggregates of processes that change over time (such as “political attitudes”, “parent-adolescent conflict” or “depression”).

    Developmental theory has long been concerned with these kinds of causal relationships, using concepts such as transactional relationships between variables and dynamic systems theories to build theory and model phenomena in which the events or traits we are interested in are the aggregate outcome of many minute causal events over time, and in which the intermediate outcomes are in turn also causes. Simple, unidirectional causal graphs will fail to capture what is actually going on in many such cases.

    Longitudinal data of course helps, but the real problem is that the practical time resolution of measurement is almost always too coarse. My point is really that there perhaps cannot be a purely data-driven understanding of phenomena observed at such an aggregate level, because it would depend too much on the assumptions going into the measurement. There is no escape from needing to develop detailed theory, mindful of different levels of observation. The developmental perspective on causality is something I often feel is missing from discussions of causality in fields concerned with complex, aggregate phenomena like human behaviour. As a clinical child psychologist with much of my background in developmental psychology, I find this perplexing.
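
    For concreteness, here is a deliberately toy sketch in Python of the kind of transactional process described above: two variables that each feed causally into the other at every time step, so neither a one-way A -> B nor B -> A graph describes the system. All variable names and coefficients are hypothetical:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy transactional process: each variable's next value depends on its
    # own past and on the other variable's past (hypothetical coefficients).
    n_steps = 200
    depression = np.zeros(n_steps)
    conflict = np.zeros(n_steps)
    for t in range(1, n_steps):
        depression[t] = (0.7 * depression[t-1] + 0.2 * conflict[t-1]
                         + rng.normal(scale=0.1))
        conflict[t] = (0.7 * conflict[t-1] + 0.2 * depression[t-1]
                       + rng.normal(scale=0.1))

    # A single cross-sectional snapshot shows only that the two variables are
    # correlated; the reciprocal causal structure lives in the dynamics.
    print(np.corrcoef(depression, conflict)[0, 1])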

  3. The road to hell is paved with good intentions.

    Empirical observations are always theory-dependent. This is a central point of Popper’s critique of inductivism, but one that appears increasingly forgotten, ignored, or dismissed. Of course, theory-dependence could be dismissed on ‘new experimentalist’ grounds, such as those apparently advocated in the article. Whether this is justified at all, or at least whether it is justified in the social sciences, is up for debate. But it is precisely because of theory-dependence that we should be explicit about causal reasoning. Whenever we analyse the empirical relationships between variables, we do so because we have theoretical reason to expect these relationships. Observing relationships implies that there is a reason ‘why’ these relationships should be there in the first place. The ‘why’ may be a straightforward ‘X leads to Y’ story, or a story about how latent variables lead to values on observed variables (measurement), or how an unobserved variable affects both an X and a Y (spuriousness), or any variation or extension of such stories. But ultimately, unless this theoretical reasoning is entirely tautological, it will entail an element of causality. If we want to be able to properly probe and falsify our theories, then we should do everything we can to make these theoretical causal constructs as clear and explicit as possible.

    So, I do fully agree that it is incredibly counterproductive to skirt around the issue and symbolically avoid talking about causality when it is evidently virtually always a fundamental part of our theory, and hence also of our interpretation of our empirical observations. One of the main contributions of the mechanistic approach to explanation, over the deductive-nomological approach, was to force us to be explicit about the causal workings of our theoretical mechanisms. After all, the lack of explicit causal reasoning was central to the mechanistic critique of ‘naive’ positivism.

    I get it. Actual causal inference from empirics is hard, and in practice often very messy. It is great that methodologists like Rubin, Pearl, Gelman, Mayo, etc. are developing frameworks that help us think more critically about our ability to draw inferences from our observations. Any method will need to rely on at least some fundamentally untestable assumptions in order to provide us with a good estimate of the expected causal effect. Being able to articulate, and hence criticise, such assumptions is a step forward. And it’s also great that some methods, like RCTs and the quasi-experimental designs popularized by Angrist and Pischke, can sometimes help us to make certain causal inferences with fewer, or at least different, assumptions.

    I think we really need to stop treating causality as an ontological phenomenon rather than a theoretical construct, and stop treating causal inference as a light switch that is either on or off. However well-intentioned, the ‘you can’t prove causality based on design x’ criticism falsely implies that there exist designs that do allow you to ‘prove’ causality. It is this kind of talk that, on the one hand, contributes to researchers not clearly communicating their theoretical reasoning and inferences, out of fear of criticism by the causality police. On the other hand, it also contributes to the proliferation and abuse of so-called ‘causal’ designs (like some of the great examples of regression discontinuities gone wrong on this blog, or the endless stream of instrumental-variables research using incredibly tenuous justifications for the exogeneity of its instruments), because apparently that’s the only way to produce ‘good’ research. There is a real danger that this has led us back to the inductivist and ‘naive’ positivist fallacies that both falsificationists and mechanists sought to redress.

    • Alex –

      Somehow I missed this comment of yours before I wrote my comment below. I agree with you and didn’t intend to draft off of what you wrote.

      I will add that I think that Hill’s criteria for causation are a very much underutilized tool for addressing causality.

  4. Erling –

    I had a little trouble understanding what you wrote… but I think it is consistent with my view: when people try to go from correlational associations to assigning causality, they should collect longitudinal data whenever possible, and they should always try to provide a plausible theory for the mechanism of causality.

    I think that the authors of the article Andrew posted miss an important aspect of the “refraining” from addressing causality: namely, that people refrain from trying to put forth a theory for the mechanism of causality. They refrain from doing that because it is thought to be against the scientific method. It is safer to pretend that you’re not theorizing about the mechanism of causality and are only deriving causality from the evidence when it is objectively obvious. I think that often this is more a pretense than a reality. And I think that often people have a working theory for the mechanism of causality but hold back on making it explicit, because it’s risky to put that theory forward for scrutiny.

    • I see on re-reading that what I wrote there was less clear than I thought! An example from my field may make it clearer: adolescent depression and conflict with parents are correlated. Putting these constructs in a simple cause-effect scheme is pointless, because they are not discrete events but states reciprocally influencing each other over time. It’s still probably correct to talk about causality, but transactional developmental processes are probably a better model of such causal relationships than billiard balls hitting each other (exaggerated, but only a bit).

      But I think you’re right that we agree. It’s a very good point about science norms leading to implicit or poorly articulated causality in theories, while paying lip service to not inferring causality.

      • Erling –

        Yes, that’s an instructive example.

        Longitudinal data is helpful with these issues. People imply causality from cross-sectional data with an implicit assumption about the direction of causality. In the case you describe, I would imagine that conflict with parents causes depression in adolescents. With longitudinal data, and information about the sequence of phenomena, you can gain a clearer picture of whether there’s an issue such as adolescents being depressed causing conflict with their parents.
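
        As a minimal sketch of how sequencing helps, here is a toy cross-lagged regression on simulated two-wave panel data. The data-generating process, sample size, and coefficients are all hypothetical, and with real data these cross-lagged coefficients would still rest on strong assumptions (no unmeasured confounding, correctly spaced lags, and so on):

        import numpy as np

        rng = np.random.default_rng(1)
        n = 500

        # Hypothetical truth: wave-1 conflict drives wave-2 depression,
        # but wave-1 depression does not drive wave-2 conflict.
        conflict_w1 = rng.normal(size=n)
        depression_w1 = rng.normal(size=n)
        depression_w2 = (0.5 * depression_w1 + 0.4 * conflict_w1
                         + rng.normal(scale=0.5, size=n))
        conflict_w2 = 0.5 * conflict_w1 + rng.normal(scale=0.5, size=n)

        def cross_lagged_coef(outcome_w2, outcome_w1, other_w1):
            # OLS of the wave-2 outcome on its own lag and the other
            # variable's lag; returns the coefficient on the other lag.
            X = np.column_stack([np.ones(n), outcome_w1, other_w1])
            beta, *_ = np.linalg.lstsq(X, outcome_w2, rcond=None)
            return beta[2]

        print(cross_lagged_coef(depression_w2, depression_w1, conflict_w1))  # ~0.4
        print(cross_lagged_coef(conflict_w2, conflict_w1, depression_w1))    # ~0.0

        Here the asymmetry between the two cross-lagged coefficients reflects the simulated direction of causation only because the data were generated without confounding.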

  5. I listened with interest to the podcast with Jennifer Hill and Aki Vehtari and read the 2011 review essay by Andrew.

    Three comments on all this:
    1) Causality is a generalisation argument. If you make such an analysis, you can generalise your findings at a higher level than, for example, in a descriptive analysis. The distinction made by many between exploratory and confirmatory analysis is similar. At the core is your ability to generalise. Exploratory analysis takes you from A to B; confirmatory analysis takes you much further.
    2) Direct and reversed causality are called prognostic and diagnostic analytics. This relates to a comment by Jennifer on the goal of the analysis. Prognostics is predictive in that it helps decision makers understand the implications of a trend or an out-of-control signal. Diagnostics precedes prognostics in that it helps you map the consequences. In industry we distinguish between monitoring, diagnostics, prognostics, and prescriptive analytics.
    https://www.sciencedirect.com/science/article/pii/S2351978918301392
    3) To my ears, the podcast is all about generating information quality. A look at this framework would give the authors a structure on which to build their excellent, and much needed, exposition: https://www.wiley.com/en-us/Information+Quality%3A+The+Potential+of+Data+and+Analytics+to+Generate+Knowledge-p-9781118874448
