Yes – you got it. It is also the cover of my book on information quality.

i) not everyone sees it

ii) those who do, do not necessarily know what it means…

My goodness – perhaps another vindication of Douglas Adams’ Hitchhiker’s Guide to the Galaxy?

]]>The usual definition of an interaction between factors A and B is (at least for two-level factors) the difference between the effect of A at high B and the effect of A at low B, divided by two. The division is to make all the standard errors the same.

Using this definition, and if you assume that interactions are about half the size of main effects, 16 becomes 4.
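As a minimal numeric sketch of that definition (the cell means below are made up purely for illustration, with factors coded -1/+1):

```python
# Hypothetical cell means for a 2x2 two-level design (factors A, B coded -1/+1).
# y[(a, b)] holds the mean response at that combination.
y = {(-1, -1): 10.0, (+1, -1): 14.0, (-1, +1): 11.0, (+1, +1): 19.0}

# Effect of A at each level of B (high A minus low A).
effect_A_at_highB = y[(+1, +1)] - y[(-1, +1)]    # 8.0
effect_A_at_lowB  = y[(+1, -1)] - y[(-1, -1)]    # 4.0

# Interaction: half the difference of the two conditional effects,
# so its standard error matches that of the main effects.
AB = (effect_A_at_highB - effect_A_at_lowB) / 2  # 2.0
print(AB)
```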

But maybe it should be 1, at least in physical experiments. In their Bayesian method for finding active factors in fractional factorial designs, Meyer and Box (Journal of Quality Technology, 1993) assume a prior for active effects of N(0, gamma * sigma^2) and a prior for inactive effects of N(0, sigma^2), where they suggest that gamma be chosen to minimise the probability of finding no active factors. They say “important main effects and interactions tend, in our experience, to be of roughly the same order of magnitude, justifying the parsimonious choice of one common scale parameter gamma…”. In the BsProb() function in the BsMD package in R (Barrios, 2020, based on Meyer’s code), the default value of gamma is 2 (it is possible to set different gamma values for main effects and interactions, but this appears to be rarely done).

Also, for two-level experiments, it is quite common to get the magnitude of the interaction approximately equal to the magnitude of the two main effects. This just means that one combination of the two factors is unusually high or low and the other three combinations give about the same response (Daniel, 1975, page 135).
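Daniel’s point can be checked with a quick sketch (hypothetical data, one cell unusually high and the other three equal):

```python
# Hypothetical data: the (high A, high B) cell is unusually high;
# the other three combinations give the same response.
y = {(-1, -1): 10.0, (+1, -1): 10.0, (-1, +1): 10.0, (+1, +1): 18.0}

# Main effects: mean at the high level minus mean at the low level of each factor.
A = (y[(+1, -1)] + y[(+1, +1)]) / 2 - (y[(-1, -1)] + y[(-1, +1)]) / 2  # 4.0
B = (y[(-1, +1)] + y[(+1, +1)]) / 2 - (y[(-1, -1)] + y[(+1, -1)]) / 2  # 4.0

# Interaction: half the difference of the conditional effects of A at high/low B.
AB = ((y[(+1, +1)] - y[(-1, +1)]) - (y[(+1, -1)] - y[(-1, -1)])) / 2   # 4.0

# All three estimates come out with the same magnitude.
print(A, B, AB)
```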

]]>lol – the _normal_ thing to do.

]]>Although I usually do assume the effect size is centered at zero!

]]>When doing a power analysis where the effect size is assumed to be small and the base rate is assumed to be about 50%.

So… not often in my line of work!

]]>p(a,b) = p(a|b) p(b)

and

p(a ∪ b) = p(a) + p(b) - p(a,b)

]]>Yes.

]]>may be a helpful way to remember that if there are two independent sources of uncertainty in a measurement, the variance of the measurement will probably be primarily determined by the variance of the source with larger variance. I implicitly assume that the variation is Gaussian because that’s the normal thing to do.
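A minimal numeric sketch of this rule of thumb (the two standard deviations below are made up; independence is assumed, so the variances add):

```python
import math

# Two independent error sources in one measurement; variances add.
sigma1 = 3.0  # larger source
sigma2 = 1.0  # smaller source

var_total = sigma1**2 + sigma2**2  # 10.0
sd_total = math.sqrt(var_total)    # ~3.16, barely above sigma1 alone

# The larger source accounts for 90% of the total variance here.
share_larger = sigma1**2 / var_total
print(sd_total, share_larger)
```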

Bob76

]]>When is the 1st one used? lol

]]>2) Emphasizes the importance of 1, since it is completely useless

3) Further emphasizes the importance of 1; it is kind of a nice number, 2^2^2, but not otherwise especially useful.

Seems evident :)
