I prefer 50% to 95% intervals for 3 reasons:
1. Computational stability,
2. More intuitive evaluation (half the 50% intervals should contain the true value; see the sketch after this list),
3. A sense that in applications it’s best to convey where the parameters and predicted values will be, not to attempt an unrealistic near-certainty.
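To make point 2 concrete, here is a minimal sketch of the kind of coverage check this suggests: simulate data from known parameter values, compute central 50% posterior intervals, and count how often they contain the truth. This is a toy setup with hypothetical names throughout; the conjugate normal-normal posterior stands in for whatever model you would actually fit in Stan.

```python
import numpy as np

rng = np.random.default_rng(42)

n_sims = 1000   # number of simulated datasets
n_draws = 4000  # posterior draws per simulation
n_obs = 20      # observations per dataset

covered = 0
for _ in range(n_sims):
    # Draw a "true" parameter from the prior, then simulate data.
    theta_true = rng.normal(0.0, 1.0)
    y = rng.normal(theta_true, 1.0, size=n_obs)
    # Stand-in posterior: conjugate update with prior N(0, 1) and
    # known data sd of 1; a real analysis would use Stan draws here.
    post_mean = y.sum() / (n_obs + 1)
    post_sd = (1.0 / (n_obs + 1)) ** 0.5
    draws = rng.normal(post_mean, post_sd, size=n_draws)
    # Central 50% uncertainty interval from the posterior draws.
    lo, hi = np.quantile(draws, [0.25, 0.75])
    covered += (lo <= theta_true <= hi)

print(f"empirical 50% coverage: {covered / n_sims:.2f}")  # should be near 0.50
```

A side benefit, related to point 1: the 25% and 75% quantiles sit in the dense middle of the posterior draws, so they are estimated much more stably from a finite sample than the 2.5% and 97.5% quantiles in the tails.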
This came up on the Stan list the other day, and Bob Carpenter added:
I used to try to validate with 95% intervals, but it was too hard because there weren’t enough cases that got excluded and you never knew what to do if 4 cases out of 30 were outside the 95% intervals.
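Bob’s point is easy to quantify: under correct 95% coverage, the number of cases outside the interval, out of 30, is Binomial(30, 0.05), so 4 exclusions is noisy evidence at best. A quick check, assuming SciPy is available:

```python
from scipy.stats import binom

# Count of cases outside a correctly calibrated 95% interval, out of 30.
n_cases, miss_rate = 30, 0.05
print(binom.pmf([0, 1, 2, 3, 4], n_cases, miss_rate))  # P(0..4 exclusions)
print(1 - binom.cdf(3, n_cases, miss_rate))            # P(4 or more), about 0.06

# With 50% intervals, the expected count outside is 15 with sd of about 2.7,
# so departures from nominal coverage are much easier to detect.
print(binom.std(n_cases, 0.5))
```

So 4 out of 30 outside the 95% intervals happens about 6% of the time even under perfect calibration, which is why the check tells you so little.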
Reason (3) is a two-edged sword, because I think people will be inclined to “read” the 50% intervals as 95% intervals out of habit, expecting higher coverage than they have. But I like the point about not trying to convey an unrealistic near-certainty (which is exactly how I think people look at 95% intervals, given the .05 p-value convention).
And remember to call them uncertainty intervals.