In suggesting “a socially responsible method of announcing associations,” AT points out that, as much as we try to be rigorous about causal inference, assumptions slip in through our language:
The trouble is, causal claims have an order to them (like “aliens cause cancer”), and so do most if not all human sentences (“I like ice cream”). It’s all too tempting to read a non-directional association claim as if it were so — my (least) favourite was a radio blowhard who said that in teens, cellphone use was linked with sexual activity, and without skipping a beat angrily proclaimed that giving kids a cell phone was tantamount to exposing them to STDs. . . . So here’s a modest proposal: when possible, beat back the causal assumption by presenting an associational idea in the order least likely to be given a causal interpretation by a layperson or radio host.
Here’s AT’s example:
A random Google News headline reads: “Prolonged Use of Pacifier Linked to Speech Problems” and strongly implies a cause and effect relationship, despite the (weak) disclaimer from the quoted authors. Reverse that and you’ve got “Speech Problems linked to Prolonged Use of Pacifier” which is less insinuating.
It’s an interesting idea, and it reminds me of something that really bugs me.
In the Gospel According to Rubin (which I pretty much follow religiously, except that I use graphs all the time, and Don almost never does), we never speak of the causes of effects, only the effects of causes. This makes sense, for all the usual reasons. (For example, why did my cat die? Because she ran into the street, because a car was going too fast, because the driver wasn’t paying attention, because a bird distracted the cat, because the rain stopped so the cat went outside, etc. When you look at it this way, the question of “why” is pretty meaningless.)
On the other hand, I ask Why all the time. Part of me wants to make my speech more precise, to avoid asking Why, which is a meaningless question, while part of me wants a statistical framework in which the Why question makes sense. (And, no, this has nothing to do with the Rubin vs. Pearl issue, as both those frameworks define causation in terms of potential outcomes.)
P.S. Keith writes: Pearl’s latest paper did cover causes of effects. From Pearl:
In particular, the paper surveys the development of mathematical tools for inferring (from a combination of data and assumptions) answers to three types of causal queries: (1) queries about the effects of potential interventions, (also called “causal effects” or “policy evaluation”) (2) queries about probabilities of counterfactuals, (including assessment of “regret,” “attribution” or “causes of effects”) and (3) queries about direct and indirect effects (also known as “mediation”).
My reply (without having actually read the cited paper): yes, I’m sure Pearl is doing something relevant here, but I’m guessing that he’s not answering (or trying to answer) the “Why did my cat die?” sort of question. The “attribution” thing (see item 2 immediately above) is important and does seem like an intermediate step between causes of effects and effects of causes.
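The distinction between an effect of a cause and a cause of an effect can be made concrete with a toy potential-outcomes simulation. Everything below is an illustrative assumption of mine, not anything from Rubin's or Pearl's papers: I invent a population where we can see both potential outcomes for every unit, estimate the average treatment effect (an effect-of-cause query), and then compute a crude "attribution" quantity, the fraction of treated cases whose outcome would not have occurred without treatment.

```python
import random

random.seed(0)

# Hypothetical population: each unit carries both potential outcomes,
# y0 (outcome if untreated) and y1 (outcome if treated). The risk
# numbers (0.10 baseline, 0.20 added by treatment) are made up.
n = 100_000
units = []
for _ in range(n):
    y0 = 1 if random.random() < 0.10 else 0            # baseline risk
    y1 = 1 if (y0 or random.random() < 0.20) else 0    # treatment adds risk
    x = 1 if random.random() < 0.5 else 0              # randomized treatment
    units.append((x, y0, y1))

# Effect of a cause: the average treatment effect E[Y1 - Y0].
ate = sum(y1 - y0 for _, y0, y1 in units) / n

# Cause of an effect ("attribution"): among treated units that had the
# outcome, how many would NOT have had it if untreated?
treated_cases = [(y0, y1) for x, y0, y1 in units if x == 1 and y1 == 1]
prob_necessity = sum(1 for y0, _ in treated_cases if y0 == 0) / len(treated_cases)

print(f"ATE (effect of cause):          {ate:.3f}")
print(f"P(necessity) (cause of effect): {prob_necessity:.3f}")
```

The simulation cheats, of course: it reads off both potential outcomes for each unit, whereas real data show only one. That is exactly the rub with causes-of-effects questions, since the attribution quantity depends on counterfactual information that a randomized experiment alone does not identify without further assumptions.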