Comments on: Practical Bayesian model evaluation in Stan and rstanarm using leave-one-out cross-validation
https://statmodeling.stat.columbia.edu/2016/10/21/practical-loo/
Mon, 31 Oct 2016 03:45:06 +0000
By: Sumio Watanabe
https://statmodeling.stat.columbia.edu/2016/10/21/practical-loo/#comment-336018
Mon, 31 Oct 2016 03:45:06 +0000

Dear Professor Aki Vehtari,

Pareto Smoothed Importance Sampling Cross Validation (PSISCV) is a very interesting method for approximating Bayesian cross validation (BCV). Although WAIC is asymptotically equivalent to BCV, it is not an approximation of BCV but an estimator of the generalization error. I would recommend comparing cross validation and information criteria from the viewpoint of statistical estimators of the generalization error. A simple experiment, shown on my web page, demonstrates that there are cases where E|PSISCV - GE| > E|WAIC - GE|. I have heard from statisticians that any estimator should be studied in terms of its bias and variance.
By: Aki Vehtari
https://statmodeling.stat.columbia.edu/2016/10/21/practical-loo/#comment-335495
Sat, 29 Oct 2016 17:51:04 +0000

Cool!
By: Shravan
https://statmodeling.stat.columbia.edu/2016/10/21/practical-loo/#comment-334757
Fri, 28 Oct 2016 12:41:51 +0000

Here is an application in psycholinguistics by Bruno Nicenboim:

By: Aki Vehtari
https://statmodeling.stat.columbia.edu/2016/10/21/practical-loo/#comment-334745
Fri, 28 Oct 2016 12:17:24 +0000

I was also a big proponent of WAIC before doing the research that led to this paper. WAIC is a significant improvement over DIC, and Watanabe's papers are important for Bayesian LOO, but PSIS-LOO is more reliable and its potential failures are easier to diagnose. See also the results in Vehtari et al. (2016), "Bayesian Leave-One-Out Cross-Validation Approximations for Gaussian Latent Variable Models", http://jmlr.org/papers/v17/14-540.html.
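[Editor's note: a minimal R sketch of the side-by-side comparison described above, using current versions of the loo package; the example model (`stan_glm` on `mtcars`) is purely illustrative and not from the comment.]

```r
library(rstanarm)  # assumed available; any fit with a log_lik method works
library(loo)

# Hypothetical example model, just to have something to evaluate
fit <- stan_glm(mpg ~ wt, data = mtcars, refresh = 0)

loo1  <- loo(fit)   # PSIS-LOO; its printout reports Pareto k diagnostics,
                    # which flag observations where the importance-sampling
                    # approximation may be unreliable
waic1 <- waic(fit)  # WAIC, for comparison

print(loo1)
loo_compare(loo1, waic1)  # difference between the two elpd estimates
```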
By: Aki Vehtari
https://statmodeling.stat.columbia.edu/2016/10/21/practical-loo/#comment-334736
Fri, 28 Oct 2016 12:04:07 +0000

LOO can be used for marginal predictive checks (see, e.g., Gelfand, 1996, "Model determination using sampling-based methods", or Bayesian Data Analysis, 3rd ed., pp. 152-153). We'll add an example of this in the near future.
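[Editor's note: a hedged sketch of such a marginal check using more recent versions of loo and bayesplot; the `save_psis` argument and the LOO-PIT helper postdate this comment, and the `mtcars` model is illustrative only.]

```r
library(rstanarm)
library(loo)
library(bayesplot)

# Hypothetical example model
fit <- stan_glm(mpg ~ wt, data = mtcars, refresh = 0)

loo1 <- loo(fit, save_psis = TRUE)  # keep the PSIS object for its weights
yrep <- posterior_predict(fit)      # posterior predictive draws

# LOO-PIT overlay: each observation y_i is compared against its
# (approximate) leave-one-out predictive distribution; values far
# from uniform indicate marginal misfit
ppc_loo_pit_overlay(y = mtcars$mpg, yrep = yrep,
                    lw = weights(loo1$psis_object))  # log importance weights
```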
By: Gmcirco
https://statmodeling.stat.columbia.edu/2016/10/21/practical-loo/#comment-331177
Sat, 22 Oct 2016 18:42:08 +0000

Interesting. Richard McElreath is a big proponent of WAIC in his book "Statistical Rethinking"; I'm curious to see how these compare.
By: Mike Lawrence
https://statmodeling.stat.columbia.edu/2016/10/21/practical-loo/#comment-331109
Sat, 22 Oct 2016 16:04:03 +0000

Neat! I understand how one can use LOO for model comparison, but the paper notes that it can be useful as a posterior predictive check as well. It would be great to see an example of this latter usage. Would you be looking at the distribution of pointwise LOO values? Or maybe adding code in generated quantities that samples new observations given the model and creates a log_lik2 for these simulated samples, permitting you to loo::compare(loo::loo(log_lik), loo::loo(log_lik2))?