James Robins, Tyler VanderWeele, and Richard Gill write:
Neyman introduced a formal mathematical theory of counterfactual causation that now has become standard language in many quantitative disciplines, but not in physics. We use results on causal interaction and interference between treatments (derived under the Neyman theory) to give a simple new proof of a well-known result in quantum physics, namely, Bell's inequality.
Now the predictions of quantum mechanics and the results of experiment both violate Bell’s inequality. In the remainder of the talk, we review the implications for a counterfactual theory of causation. Assuming with Einstein that faster than light (supraluminal) communication is not possible, one can view the Neyman theory of counterfactuals as falsified by experiment. . . .
Is it safe for a quantitative discipline to rely on a counterfactual approach to causation, when our best confirmed physical theory falsifies their existence?
I haven’t seen the talk, but based on the above abstract, I think Robins et al. are correct. The problem is not special to counterfactual analysis; it’s with conditional probability more generally. If you recall your college physics, you’ll realize that the results of the two-slit experiment violate the laws of joint probability, as we discussed a few years ago here and here.
Given that classical probability theory (that is, the equation P(A&B)=P(A|B)P(B)) does not fit quantum reality, it makes sense to me that the Neyman-Rubin model of causation, which in practice is always applied with probabilistic models, will not work in the quantum realm. If you try to imagine applying potential-outcomes notation to the two-slit experiment, you'll see that it just won't work.
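To see the conflict numerically, here is a minimal toy sketch of the two-slit setup. The path lengths and wavelength are hypothetical numbers chosen purely for illustration: each slit contributes a complex amplitude, amplitudes (not probabilities) add, and the result disagrees with what the classical law of total probability would give.

```python
import cmath

def amplitude(path_length, wavelength=1.0):
    # Phase accumulated along a path of the given length (toy model).
    return cmath.exp(2j * cmath.pi * path_length / wavelength)

# Hypothetical path lengths from the two slits to one point on the screen,
# chosen half a wavelength apart so the amplitudes cancel there.
d1, d2 = 10.0, 10.5

p1 = abs(amplitude(d1)) ** 2  # intensity with only slit 1 open
p2 = abs(amplitude(d2)) ** 2  # intensity with only slit 2 open

# Both slits open: quantum mechanics adds amplitudes, then squares.
p_both = abs(amplitude(d1) + amplitude(d2)) ** 2

# Classical probability (law of total probability, slit chosen 50/50)
# would instead mix the two single-slit intensities.
classical = 0.5 * p1 + 0.5 * p2

print(p_both, classical)  # near 0.0 vs 1.0: interference breaks the mixture
```

At this screen position the classical mixture predicts intensity 1.0, while the amplitudes interfere destructively and give essentially 0. No assignment of a joint distribution over "which slit" and "where it lands" reproduces this, which is the sense in which the two-slit experiment violates the laws of joint probability.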
Is this relevant for macroscopic statistics? I don’t know. Here are my thoughts (with Mike Betancourt) on the matter.
I think it’s a fascinating topic.