Boolean models (“it’s either A or (B and C)”) seem to be the natural way that we think, but additive models (“10 points if you have A, 3 points if you have B, 2 points if you have C”) seem to describe reality better—at least, the aspects of reality that I study in my research.
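To make the contrast concrete, here's a minimal sketch of the two model types described above; the predictor names and point values are the illustrative ones from the sentence, not any real model:

```python
# Boolean model: the outcome flips based on a logical rule.
def boolean_rule(a: bool, b: bool, c: bool) -> bool:
    """It's either A, or (B and C)."""
    return a or (b and c)

# Additive model: each factor contributes points to a running total.
def additive_score(a: bool, b: bool, c: bool) -> int:
    """10 points if you have A, 3 if B, 2 if C."""
    return 10 * a + 3 * b + 2 * c

# The Boolean rule treats (A alone) and (B and C together) as identical;
# the additive score distinguishes them (10 vs. 5).
print(boolean_rule(True, False, False), additive_score(True, False, False))  # True 10
print(boolean_rule(False, True, True), additive_score(False, True, True))    # True 5
```

The additive version also degrades gracefully: dropping one factor shifts the score a little, whereas the Boolean version can flip its answer entirely.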
Additive models come naturally to political scientists and economists, myself included. We think of your political attitudes, for example, as a sum of various influences (as for example in this paper with Yair). Similarly, economists model decisions in terms of latent continuous variables. But my impression is that “civilians” think in a much more Boolean way, with different factors being switches that flip you to one state or another.
And, when it comes to statistics, applied people often think Booleanly or lexicographically (“Use rule A, with rule B as a tiebreaker”) and, I think, make mistakes as a result. For example, consider the attitude that seems to be prevalent in econometrics, that you want to use an unbiased estimate and then reduce variance only as a secondary concern. As we’ve discussed elsewhere in this space, such an attitude is incoherent because in practice the only way to get an unbiased estimate is to pool data and thus assume the effect of interest does not vary. Also recall the foolish survey researchers who don’t want to let go of the fiction that they are doing theoretically-justified inference using the principles of probability sampling.
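The lexicographic pattern ("use rule A, with rule B as a tiebreaker") can be sketched with a tuple sort key; the candidate data here is invented purely for illustration:

```python
# Hypothetical candidates scored under two rules.
candidates = [
    {"name": "x", "rule_a": 2, "rule_b": 7},
    {"name": "y", "rule_a": 2, "rule_b": 9},
    {"name": "z", "rule_a": 1, "rule_b": 99},
]

# A tuple key is lexicographic: rule_a decides first; rule_b matters
# only when rule_a is exactly tied. Note that z's huge rule_b score
# can never compensate for its deficit in rule_a -- the hallmark of
# lexicographic, as opposed to additive, reasoning.
ranked = sorted(candidates, key=lambda c: (-c["rule_a"], -c["rule_b"]))
print([c["name"] for c in ranked])  # ['y', 'x', 'z']
```

An additive scorer would instead weight and sum the two rules, letting a strong enough showing on rule B outweigh a weakness on rule A.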
We live in an additive world that our minds try to model Booleanly. Sort of like how Mandelbrot pointed out that mountains and trees are fractals but we like to think of them as triangles, circles, and sticks (as exemplified so clearly in children’s drawings).