Comments on: How about zero-excluding priors for hierarchical variance parameters to improve computation for full Bayesian inference?
https://statmodeling.stat.columbia.edu/2018/05/04/zero-excluding-priors-hierarchical-variance-parameters-improve-computation-full-bayesian-inference/
Fri, 04 May 2018 21:00:48 +0000
By: Daniel Lakeland
https://statmodeling.stat.columbia.edu/2018/05/04/zero-excluding-priors-hierarchical-variance-parameters-improve-computation-full-bayesian-inference/#comment-724319
Fri, 04 May 2018 20:28:42 +0000

I’ve been using gamma(N, (N-1)/x) priors for positive quantities a fair amount. This prior puts its maximum density at x, and as you increase N it becomes more and more concentrated around that x. N is typically chosen somewhere between 3 and 20, depending on how much information you have about how close to x the parameter is.
]]>
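The mode claim above is easy to verify: a gamma distribution with shape N and rate (N-1)/x has its mode at (N-1)/rate = x for N > 1, and its standard deviation sqrt(N)/rate = x·sqrt(N)/(N-1) shrinks as N grows. A minimal pure-Python sketch (the value x = 2.0 and the grid are illustrative choices, not from the comment):

```python
import math

def gamma_logpdf(y, shape, rate):
    """Log density of a gamma(shape, rate) distribution at y > 0."""
    return (shape * math.log(rate) - math.lgamma(shape)
            + (shape - 1) * math.log(y) - rate * y)

x = 2.0                                    # intended mode (illustrative choice)
grid = [0.01 * i for i in range(1, 1001)]  # y values in (0, 10]

for N in (3, 10, 20):
    rate = (N - 1) / x
    # Analytically, the mode of gamma(shape=N, rate) is (N - 1) / rate = x;
    # locate it numerically on the grid as a check.
    mode = max(grid, key=lambda y: gamma_logpdf(y, N, rate))
    sd = math.sqrt(N) / rate               # sd of a gamma(N, rate) variate
    print(f"N={N}: mode ~ {mode:.2f}, sd = {sd:.3f}")
```

For each N the numerically located mode sits at x, while the standard deviation drops (roughly 1.73 at N=3 down to about 0.47 at N=20), matching the comment's point that larger N concentrates the prior around x.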
By: Dan Simpson
https://statmodeling.stat.columbia.edu/2018/05/04/zero-excluding-priors-hierarchical-variance-parameters-improve-computation-full-bayesian-inference/#comment-724273
Fri, 04 May 2018 18:05:17 +0000

There’s a pretty easy simulation study to be done here. I’m moderately sure the answer is that they give worse predictions and are hard to calibrate.
]]>
By: pwyll
https://statmodeling.stat.columbia.edu/2018/05/04/zero-excluding-priors-hierarchical-variance-parameters-improve-computation-full-bayesian-inference/#comment-724262
Fri, 04 May 2018 17:41:03 +0000

In reply to pwyll.

Never mind, I see you’ve linked to resources in the post. Will dig into them now…

]]>
By: pwyll
https://statmodeling.stat.columbia.edu/2018/05/04/zero-excluding-priors-hierarchical-variance-parameters-improve-computation-full-bayesian-inference/#comment-724260
Fri, 04 May 2018 17:38:32 +0000

It sounds like you’re really not a fan of using an inverse-gamma prior for the variance. Is there a writeup you can point me to that goes over the reasons? Thanks!
]]>
By: Jonathan (another one)
https://statmodeling.stat.columbia.edu/2018/05/04/zero-excluding-priors-hierarchical-variance-parameters-improve-computation-full-bayesian-inference/#comment-724247
Fri, 04 May 2018 17:13:30 +0000

So, instead of trying to get Stan (or whatever you’re using) to cover the entire space, you split up the problem externally into “variance = 0” and “variance excluded from 0” components.

So in other words, sometimes the point null is correct?