“Mediation analysis” is this thing where you have a treatment and an outcome and you’re trying to model how the treatment works: how much does it directly affect the outcome, and how much of the effect is “mediated” through intermediate variables? Fabrizia Mealli was discussing this with me the other day, and she pointed out that the “direct effect” is defined only relative to a model: the direct effect of a treatment can be thought of as a residual effect, after accounting for all the other pathways under consideration. This is *not* the same as simply fitting a multiple regression of the outcome on the treatment, controlling for all the intermediate variables, and looking at the coefficient of the treatment: that won’t work at all. But there are methods such as path analysis and mediation analysis that can fit these models, under some assumptions.
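To make the “that won’t work at all” point concrete, here’s a minimal simulation (my own hypothetical numbers, not from the post): when an unmeasured variable affects both the mediator and the outcome, regressing the outcome on treatment while controlling for the mediator does not recover the direct effect, even when the treatment itself is randomized.

```python
# Hypothetical sketch: why "control for the mediator" fails as a
# direct-effect estimate when an unmeasured variable u confounds
# the mediator-outcome relationship.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

z = rng.binomial(1, 0.5, n)                      # randomized treatment
u = rng.normal(size=n)                           # unmeasured mediator-outcome confounder
m = 0.8 * z + u + rng.normal(size=n)             # mediator
y = 0.5 * z + 0.6 * m + u + rng.normal(size=n)   # true direct effect = 0.5

# OLS of y on (1, z, m): conditioning on m makes z and u correlated
# (m is a collider on the path z -> m <- u -> y), biasing the z coefficient.
X = np.column_stack([np.ones(n), z, m])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
print(beta[1])  # comes out near 0.1 in this setup, far below the true 0.5
```

The coefficients here are made up for illustration; the point is only that the naive “regression controlling for the mediator” answer can land nowhere near the true direct effect.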

In the real world, it’s my impression that almost all the mediation analyses that people actually fit in the social and medical sciences are misguided: there are lots of examples where the assumptions aren’t clear, where coefficient estimates are in any case hopelessly noisy, and where confused people will over-interpret statistical significance (see here, for example).

So it’s natural to take what would seem to be a conservative position and forget about mediation analysis, taking the fallback position of intent-to-treat analysis or, more generally, just trying to estimate treatment effects without untangling causal pathways. And indeed that’s pretty much what I’ve done, as you can see in the causal inference chapters in our books. In specific applications I’ve worked with particular causal mechanisms, but I’ve not tried to use general techniques for mediation analysis.

But . . . more and more I’ve been coming to the conclusion that the standard causal inference paradigm is broken. I’m talking about the paradigm under which a researcher dreams up an idea for a treatment and then designs a study, collects data, and estimates “the treatment effect.” It ain’t working: nowadays, treatment effects are small and variable, not large and stable (the low-hanging fruit have already been plucked). Our little experiments don’t have enough data to allow us to estimate real-world treatment effects and their variation; at the same time, we’re not efficiently using data to come up with our treatments. On both grounds, I think the way forward has to involve intermediate outcomes and modeling/estimation of causal paths.

So how to do it? I don’t think traditional path analysis or other multivariate methods of the throw-all-the-data-in-the-blender-and-let-God-sort-em-out variety will do the job. Instead we need some structure and some prior information.

A good starting point might be the literature on causal inference for time series (longitudinal data, as they call it in biostatistics), as such models have built-in structure. Also examples such as compliance, where there are natural constraints that can be assumed on the causal relationships.

Fabrizia Mealli pointed me to some recent papers:

A Bayesian Semiparametric Approach to Intermediate Variables in Causal Inference

“Trying to make some sense of it all, but I can see it makes no sense at all . . . stuck in the middle with you”

A critical post (“trying to make sense of it all, but I can see it makes no sense at all”) about mediation analysis (“stuck in the middle”) using the above title: +1

Picking lyrics for the blogpost title from a song I actually like on top of that: +2

“Stuck in the middle with you” – Stealers Wheel

https://www.youtube.com/watch?v=DohRa9lsx0Q

I have considered this issue repeatedly as an editor. It is almost to the point where I will not accept any conclusion from mediation analysis, unless it is viewed as purely exploratory, etc.

I tried to think through some of the issues in my own very naive way, and the result is here:

http://judgmentmisguided.blogspot.com/2016/06/alternatives-to-mediation-in-data.html

(I should note that some of my concern was inspired by a retraction of a paper that relied on a mediation analysis, several years ago:

https://retractionwatch.com/category/by-journal/judgment-and-decision-making/ .)

I strongly recommend Judea Pearl’s recent readable book. He gives a rigorous account of both direct and indirect effects. Also, we should always (to the extent possible) be thinking about mediators, because science progresses via the identification of mechanisms. This is not to defend common mediation analysis. (Pearl doesn’t.) But we should draw more causal diagrams with mediators than we do.

I agree that Pearl had this figured out a while ago, but not enough people in social sciences and elsewhere listened. And it is harder to understand mediation in the potential outcomes framework, although you can write down equations about it.

If the key problem is that the total-treatment-effect estimates are noisy, then the fact that the total causal effect can be decomposed into a natural direct effect and a natural indirect effect suggests a strategy: estimate both the natural direct effect and the natural indirect effect, and rely on Rao-Blackwellization to obtain a more precise estimate of the total causal effect.
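As a sanity check on that decomposition, here is a sketch with made-up linear coefficients and no treatment-mediator interaction, so the natural direct and indirect effects are well defined and add up exactly to the total effect:

```python
# Hedged sketch (my own toy coefficients): in a linear mediation model
# with no interaction, total effect = NDE + NIE, shown via simulated
# potential outcomes.
import numpy as np

rng = np.random.default_rng(1)
n = 500_000

def mediator(z, eps):      # M(z): potential mediator under treatment z
    return 1.2 * z + eps

def outcome(z, m, eps):    # Y(z, m): potential outcome
    return 0.4 * z + 0.7 * m + eps

em = rng.normal(size=n)
ey = rng.normal(size=n)

m0, m1 = mediator(0, em), mediator(1, em)

total = np.mean(outcome(1, m1, ey) - outcome(0, m0, ey))  # 0.4 + 0.7*1.2
nde   = np.mean(outcome(1, m0, ey) - outcome(0, m0, ey))  # hold M at M(0): 0.4
nie   = np.mean(outcome(1, m1, ey) - outcome(1, m0, ey))  # shift M(0)->M(1): 0.84
print(total, nde + nie)  # the two agree: 1.24 and 1.24
```

With interactions or nonlinearity the bookkeeping gets subtler (the NDE/NIE definitions depend on which baseline you hold the mediator at), but the identity above is the starting point for the Rao-Blackwellization idea.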

This got me thinking again about a Galton board with pegs of different sizes and shapes, some of which are malleable, changing according to the nature of the pegs above and around them. So I went looking and found this, which may be of interest: https://physics.aps.org/featured-article-pdf/10.1103/PhysRevX.7.031021 The quantum-walk wave-function papers are interesting too.

“Log-normal Distributions across the Sciences: Keys and Clues.” Limpert et al. 2001. http://stat.ethz.ch/~stahel/lognormal/bioscience.pdf
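The core point of the Limpert et al. paper can be illustrated in a few lines (my own toy numbers): many small multiplicative effects produce an approximately lognormal distribution, just as many small additive effects produce an approximately normal one — which is also the idea behind the multiplicative (“lognormal”) Galton board that comes up below.

```python
# Toy illustration (my numbers): a product of many small positive
# random factors is approximately lognormal, because its log is a
# sum of iid terms and the CLT applies on the log scale.
import numpy as np

rng = np.random.default_rng(2)
n_steps, n_samples = 50, 100_000

factors = rng.uniform(0.9, 1.1, size=(n_samples, n_steps))
x = factors.prod(axis=1)   # multiplicative "Galton board": 50 small shocks

logx = np.log(x)
z = (logx - logx.mean()) / logx.std()
print(np.mean(z**3))  # skewness of log(x): near 0, i.e. log(x) is ~normal
```

Meanwhile x itself is right-skewed, as a lognormal should be; swapping the product for a sum of the same shocks would give you the ordinary (additive, normal) Galton board instead.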

Very helpful. Thanks.

Thanks for the link. I’ve often linked to an old website (no longer up) called Life is Lognormal by Limpert et al.

Also of possible interest: I’ve got some links to related teaching handouts (“Logarithms and Means”, “Lognormal Distributions 1”, and “Lognormal 2”) at http://www.ma.utexas.edu/users/mks/ProbStatGradTeach/ProbStatGradTeachHome.html

Martha: The link to the lognormal Galton board seems to be broken: http://www.inf.ethz.ch/personal/gut/lognormal/

Unfortunately, it’s almost impossible to keep up with old links in old pages.

The lognormal distribution deserves to be better known, but there is also a risk of putting too much faith in the lognormality of things. Looking at the first example in Table 2, I came across this paper: “Normal and lognormal data distribution in geochemistry: death of a myth.” https://link.springer.com/article/10.1007%2Fs002549900081

“Too much faith”

Putting too much faith on anything is always a risk. The solution is not to take anything on faith, but to think in each case whether or not a particular model is appropriate — and why or why not.


I don’t get the connection to the Galton board ( https://phaneron0.wordpress.com/2012/11/23/two-stage-quincunx-2/ )

In addition to all this, we have the conundrums of complex systems when doing causal inference in real-world interventions.

Very curious what people think of this:

Rickles, Dean. “Causality in Complex Interventions.” Medicine, Health Care, and Philosophy 12, no. 1 (March 2009): 77–90. https://doi.org/10.1007/s11019-008-9140-4.

http://cyber.sci-hub.tw/MTAuMTAwNy9zMTEwMTktMDA4LTkxNDAtNA==/rickles2008.pdf#view=FitH

Tyler Vanderweele’s book is great. It’s called “Explanation in Causal Inference: Methods for Mediation and Interaction”

Thanks for posting these papers. As a social psychologist, I find them especially helpful. As a field, I think we know that mediation analyses are generally poorly done, yet they seem to be a huge asset for publishing in JPSP – the field’s flagship publication. Some mediation techniques by Preacher and others have really caught on – here is a 2010 publication on multilevel mediation that’s already been cited over 1,100 times (http://psycnet.apa.org/buy/2010-18042-001) – but it would be good to know how these analyses can be conducted more rigorously.

The term “direct effects” is a misnomer, as I’ve told anyone who will listen.

Really, it just means “the portion of the treatment effect that’s *not* mediated by the mediators you’ve modeled.” So it’s only defined relative to the mediator(s) you’ve chosen to model, not in and of itself.

I think that’s different, though, than saying it’s “defined only relative to a model,” since it can be defined without reference to a model, e.g. with potential outcomes.

The need to pre-specify a given causal pathway before fitting any of these models–mediation analysis or principal stratification–is one of a few big problems with the study of causal mechanisms as it stands, IMO.

On another note, I found Lindsay Page’s paper in JREE (here) and the ensuing discussion–including push-back from Tyler VanderWeele–a really good introduction to this debate.