
A statistical version of Arrow’s paradox


  1. K? O'Rourke says:

    Maybe just a function of not showing functions?

    How often are scientists shown nice plots of type I error rates or CI coverage as a function of the true parameters and/or sample features – rather than _lied_ to about these being constants?

    Some nice (separate?) work by Michael Fay and Scott Emerson on how to do this.
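    [The nonconstancy being pointed to here can be shown with a minimal, self-contained sketch – mine, not Fay's or Emerson's. It computes the exact coverage of the nominal 95% Wald interval for a binomial proportion as a function of the true p, which is the classic example of coverage that is anything but constant.]

    ```python
    # Exact coverage of the nominal 95% Wald CI for a binomial proportion,
    # as a function of the true p. Illustrative sketch, stdlib only.
    import math

    def wald_coverage(p, n=50, z=1.96):
        """Exact probability that the Wald interval covers the true p
        when X ~ Binomial(n, p)."""
        total = 0.0
        for k in range(n + 1):
            phat = k / n
            half = z * math.sqrt(phat * (1 - phat) / n)
            if phat - half <= p <= phat + half:
                total += math.comb(n, k) * p**k * (1 - p)**(n - k)
        return total

    # Plotting wald_coverage over a grid of p would give exactly the kind
    # of picture described above: coverage oscillates, sits below 0.95 for
    # moderate p, and collapses near the boundaries.
    coverage = {p: wald_coverage(p) for p in (0.01, 0.05, 0.25, 0.5)}
    ```

    [At n = 50 the coverage near p = 0.01 is under 50%, nowhere near the advertised 95% – the "constant" nominal level hides a function.]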


  2. K? O'Rourke says:

    Maybe it's time for some dissociative identity disorder?

    That is some pragmatic (purposeful) frequency evaluation of Bayesian methods.

    The time does seem ripe.

    Andrew had a recent post on this with a draft (re small sample size and power), Jim Berger has the Fisher, Jeffreys, Neyman talks/papers, Mike Evans has the Relative Surprise Inference papers (in particular "Optimal Properties of Some Bayesian Inferences"), and Paul Gustafson and Sander Greenland have work on frequency evaluation of omnipotent versus wrong prior assumptions ("Interval Estimation for Messy Observational Data").

    Enough reading for my next trip to the beach.

    But also someone might start organizing joint talks and journal paper sessions on this.

    The timing does seem right, and my guess is that the (better) scientists would welcome it.