That’s all a guess. But what I do know is that if you sample from the posterior of a proper Bayesian model, you can automatically answer any question you want about interdependencies between the variables. The orthogonal linear algebra stuff basically insists on geometrically enforced independence.

Also, the contrasts need to be orthogonal to each other, but we have many questions that are not geometrically orthogonal.
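As a minimal sketch of the point above: once you have posterior draws, any contrast, orthogonal or not, is just a summary of those draws. Here simulated normal draws stand in for real MCMC output, and the three group means are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for MCMC output: 4000 posterior draws of three group means.
# (In practice these would come from your sampler, e.g. Stan.)
draws = rng.normal(loc=[1.0, 1.5, 2.5], scale=0.2, size=(4000, 3))

# Any question about the groups is just a function applied draw by draw:
c1 = draws[:, 1] - draws[:, 0]                 # group 2 vs group 1
c2 = draws[:, 2] - draws[:, 1]                 # group 3 vs group 2 (not orthogonal to c1)
c3 = draws[:, 2] - draws[:, :2].mean(axis=1)   # group 3 vs average of groups 1 and 2

for name, c in [("c1", c1), ("c2", c2), ("c3", c3)]:
    lo, hi = np.percentile(c, [2.5, 97.5])
    print(f"{name}: mean={c.mean():.2f}, 95% interval=({lo:.2f}, {hi:.2f})")
```

No orthogonality constraint is needed; the posterior draws carry the full joint dependence structure, so overlapping contrasts like c1, c2, and c3 are all fair game.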

We’ve written a tutorial paper on this topic; it may be of interest to readers of this blog: https://arxiv.org/abs/1807.10451. Comments are welcome.

sigma_bar ~ normal(0, s);  // s is a hyperparameter; sigma_bar is declared with <lower=0>, so this is half-normal

phi ~ dirichlet(rep_vector(a, K));  // a is a hyperparameter; phi is a simplex[K]

sigma[i] = sigma_bar * sqrt(K * phi[i]);  // mean(sigma[1:K]^2) == sigma_bar^2

It has worked well enough the couple of times I’ve used it.
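A quick numeric check of the identity above: because phi sums to 1, the construction makes the average squared scale equal sigma_bar^2 exactly. The values of K, a, and s below are arbitrary assumed hyperparameters, and this simulates one draw from the prior rather than fitting anything.

```python
import numpy as np

rng = np.random.default_rng(0)
K, a, s = 5, 2.0, 1.0  # assumed hyperparameter values for illustration

# One draw from the prior described above:
sigma_bar = abs(rng.normal(0.0, s))   # half-normal, so sigma_bar > 0
phi = rng.dirichlet(np.full(K, a))    # simplex: phi sums to 1
sigma = sigma_bar * np.sqrt(K * phi)  # per-group scales

# mean(sigma^2) = sigma_bar^2 * K * mean(phi) = sigma_bar^2 * sum(phi) = sigma_bar^2
print(np.mean(sigma**2), sigma_bar**2)
```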

The regression and Anova *models* are a special case of generalized linear models. But Anova is not just a statistical model, it’s also a way of structuring and displaying the model, batching coefficients and comparing their variances.

You’re raising a different important point, which is that statisticians typically focus on the outcome variable (continuous, binary, count, zero-inflated count, etc.), whereas practitioners often focus on the predictors (discrete, continuous, etc.). So I agree there can be communication difficulties. Perhaps we can clarify this in Regression and Other Stories.

We often see variability among subjects/classes/groups in the variability of their responses.

My usual approach is to hierarchically model the variance components as lognormal, but that doesn’t retain the benefits of half-normal priors on the variances. Is there something like a multi-level half-normal model?
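For concreteness, the lognormal approach described above can be sketched as a prior simulation. The number of groups J and the hyperparameters mu_log and tau_log are assumed values for illustration, not anything from a fitted model.

```python
import numpy as np

rng = np.random.default_rng(2)
J = 8                       # assumed number of groups
mu_log, tau_log = 0.0, 0.5  # assumed hyperparameters on the log scale

# Hierarchical lognormal prior on the group-level scales:
# log(sigma_j) ~ normal(mu_log, tau_log), i.e. sigma_j is lognormal,
# partially pooling the group scales toward a common center exp(mu_log).
log_sigma = rng.normal(mu_log, tau_log, size=J)
sigma = np.exp(log_sigma)
print(sigma)  # all positive by construction
```

The convenience here is positivity for free on the log scale; the trade-off, as the question notes, is losing the half-normal's nonzero density at the origin, which is what lets small variance components shrink toward zero.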
