I mentioned this in class all the time this semester, so I thought I should share it with the rest of you. The folk theorem is this: *When you have computational problems, often there’s a problem with your model.* I think this could be phrased more pithily—I’m not so good in the pithiness department—but in any case it’s been true in my experience.

Also relevant to the discussion is this paper from 2004 on parameterization and Bayesian modeling, which makes a related point:

> Progress in statistical computation often leads to advances in statistical modeling. For example, it is surprisingly common that an existing model is reparameterized, solely for computational purposes, but then this new configuration motivates a new family of models that is useful in applied statistics. One reason why this phenomenon may not have been noticed in statistics is that reparameterizations do not change the likelihood. In a Bayesian framework, however, a transformation of parameters typically suggests a new family of prior distributions.
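A standard instance of the reparameterization-doesn't-change-the-likelihood point (my illustration, not the paper's) is the "non-centered" parameterization of a hierarchical normal model: instead of drawing theta ~ N(mu, tau) directly, you draw eta ~ N(0, 1) and set theta = mu + tau * eta. The two versions define exactly the same distribution over theta—a quick simulation sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, tau, n = 1.0, 2.0, 200_000  # illustrative values

# Centered parameterization: draw theta directly from N(mu, tau).
theta_centered = rng.normal(mu, tau, size=n)

# Non-centered reparameterization: draw a standardized eta ~ N(0, 1),
# then shift and scale it. Same distribution over theta, but the
# sampler now works on eta, whose geometry can be much friendlier.
eta = rng.normal(0.0, 1.0, size=n)
theta_noncentered = mu + tau * eta

# The two parameterizations agree (up to Monte Carlo error).
print(theta_centered.mean(), theta_noncentered.mean())
print(theta_centered.std(), theta_noncentered.std())
```

The reparameterization is done purely so the sampler behaves better, but once you're working with eta, it becomes natural to put a different prior on it (say, a heavier-tailed one)—which is a genuinely new model, exactly the phenomenon the quoted passage describes.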

I'm a software engineer. In the face of a difficult debugging problem that was dragging out, I was once counseled, "If you can't win the game, change the rules." That seems a corollary of your theorem.

How about "Computational problems are often model problems in disguise"?