“it’s not true that ‘it all woks out’”

I haven’t yet figured out a good way to make this into a pun or joke (perhaps involving a stir-fry?), so I’m tossing it out for others to try. Any takers?

PS: A Google search on “It all woks out” got about 357 results.

Biased effect sizes are a *feature*, not a bug. In Bayesian decision theory you describe the value of different types of errors and then choose the decision that maximizes expected value. If one kind of error is much less bad than another, you will choose decisions that are biased toward the “less bad” errors, and *that’s a good thing*.
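A toy sketch of that point (hypothetical numbers, not from the thread: posterior draws from N(0.5, 1), with overestimation costing four times as much as underestimation). The value-maximizing point estimate shifts toward the cheaper error:

```python
import numpy as np

rng = np.random.default_rng(0)
theta_draws = rng.normal(0.5, 1.0, 100_000)   # stand-in posterior draws (hypothetical)

# Asymmetric linear loss: overestimating theta costs 4x as much as underestimating.
c_over, c_under = 4.0, 1.0

def expected_loss(a):
    err = a - theta_draws
    return np.mean(np.where(err > 0, c_over * err, -c_under * err))

# Choose the decision that minimizes posterior expected loss.
grid = np.linspace(-2.0, 3.0, 501)
best = grid[np.argmin([expected_loss(a) for a in grid])]
print(best)   # well below the posterior mean 0.5: "biased" toward underestimation, by design
```

(For linear loss the optimum is the c_under / (c_over + c_under) = 0.2 posterior quantile, so the Bayes decision sits well below the posterior mean; that is exactly the “biased toward the less bad error” behavior described above.)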

Psyoskeptic:

All Bayesian methods with proper priors have “biased effect sizes.” From a Bayesian standpoint, bias is not the relevant criterion, because bias is defined conditional on the true parameter value, which is never known. A Bayesian method (if the underlying model is true) will be calibrated, and calibration is conditional on the data, not on the unknown true effect size.

You can see this by running a simulation of data collected with a data-dependent stopping rule. If you simulate from the model you’re fitting, the Bayesian inferences will be calibrated. If the model is wrong, you can get miscalibration, but we always have to worry about the model being wrong; that’s another story.
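A minimal sketch of that simulation (my numbers, not the commenter’s: normal data with known sigma = 1, a standard normal prior on theta, and a rule that stops as soon as |ybar|/se > 2):

```python
import numpy as np

rng = np.random.default_rng(0)
n_sims, n_max, sigma = 2000, 100, 1.0
covered = 0
for _ in range(n_sims):
    theta = rng.normal(0.0, 1.0)          # draw the truth from the same prior used for inference
    ys = []
    for n in range(1, n_max + 1):
        ys.append(rng.normal(theta, sigma))
        ybar = np.mean(ys)
        if abs(ybar) * np.sqrt(n) / sigma > 2:   # data-dependent stopping rule
            break
    # Conjugate normal posterior: precision = prior precision + n / sigma^2.
    post_var = 1.0 / (1.0 + n / sigma**2)
    post_mean = post_var * n * ybar / sigma**2
    lo = post_mean - 0.674 * np.sqrt(post_var)   # central 50% posterior interval
    hi = post_mean + 0.674 * np.sqrt(post_var)
    covered += (lo <= theta <= hi)
coverage = covered / n_sims
print(coverage)   # close to 0.50 despite the stopping rule
```

Because the truth is drawn from the same prior used in the analysis, the 50% posterior intervals cover the true theta about half the time, stopping rule or not.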

While it is true that a Bayesian analysis doesn’t have the inflated Type I error rate, it’s not true that “it all woks out”. In the long run, Bayesian analyses that use a data-dependent stopping rule to declare the presence of an effect will have biased effect sizes that are correlated with N. There’s always going to be some bias with a data-dependent stopping rule.

However, it is also the case that some data dependent stopping rules are much more problematic than others.
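A rough illustration of that bias (my own hypothetical setup): fix a small true effect, stop as soon as the effect looks “significant”, and look at the estimates among runs that declared a positive effect:

```python
import numpy as np

rng = np.random.default_rng(1)
theta, sigma, n_max = 0.2, 1.0, 100    # small true effect, known sigma
ns, ests = [], []
for _ in range(5000):
    ys, stopped = [], False
    for n in range(1, n_max + 1):
        ys.append(rng.normal(theta, sigma))
        ybar = np.mean(ys)
        if abs(ybar) * np.sqrt(n) / sigma > 2:   # stop once the effect looks "significant"
            stopped = True
            break
    if stopped and ybar > 0:                     # runs that declared a positive effect
        ns.append(n)
        ests.append(ybar)
ns, ests = np.array(ns), np.array(ests)
print(ests.mean())                     # exceeds theta = 0.2: biased upward
print(np.corrcoef(ns, ests)[0, 1])     # negative: bigger N goes with smaller estimates
```

Every run that stops must have crossed the threshold 2/sqrt(n), so early stops (small N) can only report large estimates; that is the bias-correlated-with-N pattern described above.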
