Archive of posts filed under the Bayesian Statistics category.

Calibration and sharpness?

I really liked this paper, and am curious what other people think before I base a grant application around applying Stan to this problem in a machine-learning context. Gneiting, T., Balabdaoui, F., & Raftery, A. E. (2007). Probabilistic forecasts, calibration and sharpness. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 69(2), 243–268. Gneiting […]

More golf putting, leading to a discussion of how prior information can be important for an out-of-sample prediction or causal inference problem, even if it’s not needed to fit existing data

Steve Stigler writes: I saw a piece on your blog about putting. It suggests to me that you do not play golf, or you would not think this was a model. Length is much more important than you indicate. I attach an old piece by a friend that is indeed the work of a golfer! […]

Bayesian post-selection inference

Richard Artner, Francis Tuerlinckx, and Wolf Vanpaemel write: We are currently researching along the lines of model selection/averaging/misspecification and post-selection inference. As far as we understand, your approach to Bayesian statistical analysis looks (drastically simplified) like this: 1. A series of models is sequentially fitted (with an increase in model complexity) whereby the types of […]

Beyond Power Calculations: Some questions, some answers

Brian Bucher (who describes himself as “just an engineer, not a statistician”) writes: I’ve read your paper with John Carlin, Beyond Power Calculations. Would you happen to know of instances in the published or unpublished literature that implement this type of design analysis, especially using your retrodesign() function [here’s an updated version from Andy Timm], […]

Yes, you can include prior information on quantities of interest, not just on parameters in your model

Nick Kavanagh writes: I studied economics in college and never heard more than a passing reference to Bayesian stats. I started to encounter Bayesian concepts in the workplace and decided to teach myself on the side. I was hoping to get your advice on a problem that I recently encountered. It has to do with […]

Bayesian Computation conference in January 2020

X writes to remind us of the Bayesian computation conference: – BayesComp 2020 occurs on 7-10 January 2020 in Gainesville, Florida, USA – Registration is open with regular rates till October 14, 2019 – Deadline for submission of poster proposals is December 15, 2019 – Deadline for travel support applications is September 20, 2019 – […]

“Beyond ‘Treatment Versus Control’: How Bayesian Analysis Makes Factorial Experiments Feasible in Education Research”

Daniel Kassler, Ira Nichols-Barrer, and Mariel Finucane write: Researchers often wish to test a large set of related interventions or approaches to implementation. A factorial experiment accomplishes this by examining not only basic treatment–control comparisons but also the effects of multiple implementation “factors” such as different dosages or implementation strategies and the interactions between these […]

Here are some examples of real-world statistical analyses that don’t use p-values and significance testing.

Joe Nadeau writes: I’ve followed the issues about p-values, signif. testing et al. both on blogs and in the literature. I appreciate the points raised, and the pointers to alternative approaches. All very interesting, provocative. My question is whether you and your colleagues can point to real world examples of these alternative approaches. It’s somewhat […]

For each parameter (or other qoi), compare the posterior sd to the prior sd. If the posterior sd for any parameter (or qoi) is more than 0.1 times the prior sd, then print out a note: “The prior distribution for this parameter is informative.”

Statistical models are placeholders. We lay down a model, fit it to data, use the fitted model to make inferences about quantities of interest (qois), check to see if the model’s implications are consistent with data and substantive information, and then go back to the model and alter, fix, update, augment, etc. Given that models […]
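The check described in the title above is mechanical enough to sketch in a few lines. This is a hypothetical illustration, not code from the post: the function name `check_prior_informative` and the dict-of-draws interface are made up here, and the 0.1 threshold is the heuristic stated in the title.

```python
import numpy as np

def check_prior_informative(prior_draws, posterior_draws, threshold=0.1):
    """Compare posterior sd to prior sd for each parameter (or qoi).

    Both arguments map parameter names to 1-D arrays of simulation draws.
    If the posterior sd exceeds `threshold` times the prior sd, the data
    have not overwhelmed the prior there, so by this heuristic the prior
    counts as informative and we record the suggested note.
    """
    notes = {}
    for name in prior_draws:
        prior_sd = np.std(prior_draws[name])
        post_sd = np.std(posterior_draws[name])
        if post_sd > threshold * prior_sd:
            notes[name] = ("The prior distribution for this parameter "
                           "is informative.")
    return notes
```

One design point: the check runs per parameter, so a model can have an informative prior on one quantity of interest and a weak one elsewhere; the note fires only where the posterior has not shrunk well below the prior scale.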

Conditional probability and police shootings

A political scientist writes: You might have already seen this, but in case not: PNAS published a paper [Officer characteristics and racial disparities in fatal officer-involved shootings, by David Johnson, Trevor Tress, Nicole Burkel, Carley Taylor, and Joseph Cesario] recently finding no evidence of racial bias in police shootings: Jonathan Mummolo and Dean Knox noted […]

Multilevel Bayesian analyses of the growth mindset experiment

Jared Murray, one of the coauthors of the Growth Mindset study we discussed yesterday, writes: Here are some pointers to details about the multilevel Bayesian modeling we did in the Nature paper, and some notes about ongoing & future work. We did a Bayesian analysis not dissimilar to the one you wished for! In section […]

Holes in Bayesian Philosophy: My talk for the philosophy of statistics conference this Wed.

4pm Wed 7 Aug 2019 at Virginia Tech (via videolink): Holes in Bayesian Philosophy Andrew Gelman, Department of Statistics and Department of Political Science, Columbia University Every philosophy has holes, and it is the responsibility of proponents of a philosophy to point out these problems. Here are a few holes in Bayesian data analysis: (1) […]

“This is a case where frequentist methods are simple and mostly work well, and the Bayesian analogs look unpleasant, requiring inference on lots of nuisance parameters that frequentists can bypass.”

Nick Patterson writes: I am a scientist/data analyst, still working, who has been using Bayesian methods since 1972 (getting on for 50 years). I was initially trained at the British code-breaking establishment GCHQ, by intellectual heirs of Alan Turing. I’ve been accused of being a Bayesian fanatic, but in fact a good deal of my […]

Swimming upstream? Monitoring escaped statistical inferences in wild populations.

Anders Lamberg writes: In my mails to you [a few years ago], I told you about the Norwegian practice of monitoring proportion of escaped farmed salmon in wild populations. This practice results in a yearly updated list of the situation in each Norwegian salmon river (we have a total of 450 salmon rivers, but not […]

The Economist does Mister P

Elliott Morris points us to this magazine article, “If everyone had voted, Hillary Clinton would probably be president,” which reports: Close observers of America know that the rules of its democracy often favour Republicans. But the party’s biggest advantage may be one that is rarely discussed: turnout is just 60%, low for a rich country. […]

From deviance, DIC, AIC, etc., to leave-one-out cross-validation

Maren Vranckx writes: I am writing in connection with a post on your blog on 22 June 2011 about “Deviance, DIC, AIC, cross-validation, etc”. In this post, you mentioned that you and a student worked on DIC convergence. Can you specify how you did the research? Did you discover a reason for slow convergence of […]

They’re looking to hire someone with good working knowledge of Bayesian inference algorithms development for multilevel statistical models and mathematical modeling of physiological systems.

Frederic Bois writes: We have an immediate opening for a highly motivated research / senior scientist with good working knowledge of Bayesian inference algorithms development for multilevel statistical models and mathematical modelling of physiological systems. The successful candidate will assist with the development of deterministic or stochastic methods and algorithms applicable to systems pharmacology/biology models […]

Collinearity in Bayesian models

Dirk Nachbar writes: We were having a debate about how much of a problem collinearity is in Bayesian models. I was arguing that it is not much of a problem. Imagine we have this model Y ~ N(a + bX1 + cX2, sigma) where X1 and X2 have some positive correlation (r > .5), they […]
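Dirk’s setup can be illustrated with a quick simulation. This is a sketch of the classical least-squares picture of the same model, not the Bayesian fit from the debate; all variable names here are invented for the example. With corr(X1, X2) around 0.9, the individual estimates of b and c are noisy and strongly negatively correlated, but their sum stays well identified, which is essentially why collinearity need not be fatal when the quantity of interest involves the combination rather than the separate coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Two predictors with correlation ~0.9 (well above r > .5)
x1 = rng.normal(size=n)
x2 = 0.9 * x1 + np.sqrt(1 - 0.9**2) * rng.normal(size=n)

# Y ~ N(a + b*X1 + c*X2, sigma) with a=1, b=2, c=3, sigma=1
y = 1.0 + 2.0 * x1 + 3.0 * x2 + rng.normal(size=n)

X = np.column_stack([np.ones(n), x1, x2])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# Sampling covariance of the coefficient estimates
resid = y - X @ beta_hat
sigma2 = resid @ resid / (n - X.shape[1])
cov = sigma2 * np.linalg.inv(X.T @ X)
se = np.sqrt(np.diag(cov))

# b_hat and c_hat trade off against each other...
corr_bc = cov[1, 2] / (se[1] * se[2])

# ...but the sum b + c is estimated precisely
se_sum = np.sqrt(cov[1, 1] + cov[2, 2] + 2 * cov[1, 2])
```

Running this, `corr_bc` comes out strongly negative while `se_sum` is much smaller than either individual standard error; a Bayesian fit of the same model shows the same ridge in the posterior.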

Read this: it’s about importance sampling!

Importance sampling plays an odd role in statistical computing. It’s an old-fashioned idea and can behave just horribly if applied straight-up—but it keeps arising in different statistics problems. Aki came up with Pareto-smoothed importance sampling (PSIS) for leave-one-out cross-validation. We recently revised the PSIS article and Dan Simpson wrote a useful blog post about it […]
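For readers who have not met it, here is the "straight-up" version the excerpt warns about: plain self-normalized importance sampling, sketched on a toy problem where the proposal is wider than the target (the safe direction). The distributions and names here are invented for illustration; Pareto smoothing of the weights, which is what PSIS adds, is not shown.

```python
import numpy as np

def log_normal_pdf(x, mu, sd):
    """Log density of N(mu, sd) at x."""
    return -0.5 * np.log(2 * np.pi) - np.log(sd) - 0.5 * ((x - mu) / sd) ** 2

rng = np.random.default_rng(1)
n = 100_000

# Draw from a proposal N(0, 2) that is wider than the target N(1, 1)
draws = rng.normal(0.0, 2.0, size=n)

# Importance weights: target density over proposal density, on the log scale
log_w = log_normal_pdf(draws, 1.0, 1.0) - log_normal_pdf(draws, 0.0, 2.0)
w = np.exp(log_w - log_w.max())   # subtract max for numerical stability
w /= w.sum()                      # self-normalize

# Weighted average estimates the target mean E[x] = 1
est_mean = np.sum(w * draws)
```

The "horrible" behavior appears when the inequality flips: a proposal with thinner tails than the target gives weights with infinite variance, and the estimate can be dominated by a single draw. That failure mode is exactly what the Pareto-smoothed weights in PSIS are designed to diagnose and tame.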

All I need is time, a moment that is mine, while I’m in between

You’re an ordinary boy and that’s the way I like it – Magic Dirt. Look. I’ll say something now, so it’s off my chest. I hate order statistics. I loathe them. I detest them. I wish them nothing but ill and strife. They are just awful. And I’ve spent the last god only knows how long […]