Archive of posts filed under the Bayesian Statistics category.

Glenn Shafer: “The Language of Betting as a Strategy for Statistical and Scientific Communication”

Glenn Shafer writes: I have joined the immense crowd writing about p-values. My proposal is to replace them with betting outcomes: the factor by which a bet against the hypothesis multiplies the money it risks. This addresses the desideratum you and Carlin identify: embrace all the uncertainty. No one will forget that the outcome of […]
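Shafer's betting outcome is commonly formulated as a likelihood ratio: a bettor backs an alternative density q against the null density p, and observing x multiplies the money risked by q(x)/p(x). A minimal sketch of that idea, with hypothetical Gaussian densities and means chosen only for illustration:

```python
# Hypothetical illustration of a betting outcome against a null hypothesis.
# The payoff multiplier q(x)/p(x) is a likelihood ratio; its expected value
# under the null is 1, so large factors are evidence against the null.
from math import exp, pi, sqrt

def normal_pdf(x, mu, sigma):
    return exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * sqrt(2 * pi))

def betting_outcome(x, null_mu=0.0, alt_mu=1.0, sigma=1.0):
    """Factor by which a bet against the null multiplies the money it risks."""
    return normal_pdf(x, alt_mu, sigma) / normal_pdf(x, null_mu, sigma)
```

For example, observing x = 1 under these hypothetical densities multiplies the stake by exp(0.5), about 1.65, while x = 0 shrinks it to exp(-0.5), about 0.61: unlike a p-value, the bettor can lose.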

BizStat: Modeling performance indicators for deals

Ben Hanowell writes: I’ve worked for tech companies for four years now. Most have a key performance indicator that seeks to measure the rate at which an event occurs. In the simplest case, think of the event as a one-off deal, say an attempt by a buy-side real estate agent to close a deal on […]

Golf example now a Stan case study!

It’s here! (and here’s the page with all the Stan case studies). In this case study, I’m following up on two earlier posts, here and here, which in turn follow up this 2002 paper with Deb Nolan. My Stan case study is an adaptation of a model fit by Columbia business school professor and golf […]

Chow and Greenland: “Unconditional Interpretations of Statistics”

Zad Chow writes: I think your readers might find this paper [“To Aid Statistical Inference, Emphasize Unconditional Descriptions of Statistics,” by Greenland and Chow] interesting. It’s a relatively short paper that focuses on how conventional statistical modeling is based on assumptions that are often in the background and dubious, such as the presence of some […]

Laplace Calling

Laplace calling to the faraway towns
Now war is declared and battle come down
Laplace calling to the underworld
Come out of the sample, you boys and girls
Laplace calling, now don’t look to us
Phony Bayesmania has bitten the dust
Laplace calling, see we ain’t got no swing
Except for the ring of that […]

I hate Bayes factors (when they’re used for null hypothesis significance testing)

Oliver Schultheiss writes: I am a regular reader of your blog. I am also one of those psychology researchers who was trained in the NHST tradition and who is now struggling hard to retrain himself to properly understand and use the Bayes approach (I am working on my first paper based on JASP and its […]

Here’s a puzzle: Why did the U.S. doctor tell me to drink more wine and the French doctor tell me to drink less?

This recent post [link fixed], on the health effects of drinking a glass of wine a day, reminds me of a story: Several years ago my cardiologist in the U.S. recommended that I drink a glass of red wine a day for health reasons. I’m not a big drinker—probably I average something less than 100 […]

Calibration and sharpness?

I really liked this paper, and am curious what other people think before I base a grant application around applying Stan to this problem in a machine-learning context. Gneiting, T., Balabdaoui, F., & Raftery, A. E. (2007). Probabilistic forecasts, calibration and sharpness. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 69(2), 243–268. Gneiting […]

More golf putting, leading to a discussion of how prior information can be important for an out-of-sample prediction or causal inference problem, even if it’s not needed to fit existing data

Steve Stigler writes: I saw a piece on your blog about putting. It suggests to me that you do not play golf, or you would not think this was a model. Length is much more important than you indicate. I attach an old piece by a friend that is indeed the work of a golfer! […]

Bayesian post-selection inference

Richard Artner, Francis Tuerlinckx, and Wolf Vanpaemel write: We are currently researching along the lines of model selection/averaging/misspecification and post-selection inference. As far as we understand, your approach to Bayesian statistical analysis looks (drastically simplified) like this: 1. A series of models is sequentially fitted (with an increase in model complexity) whereby the types of […]

Beyond Power Calculations: Some questions, some answers

Brian Bucher (who describes himself as “just an engineer, not a statistician”) writes: I’ve read your paper with John Carlin, Beyond Power Calculations. Would you happen to know of instances in the published or unpublished literature that implement this type of design analysis, especially using your retrodesign() function [here’s an updated version from Andy Timm], […]

Yes, you can include prior information on quantities of interest, not just on parameters in your model

Nick Kavanagh writes: I studied economics in college and never heard more than a passing reference to Bayesian stats. I started to encounter Bayesian concepts in the workplace and decided to teach myself on the side. I was hoping to get your advice on a problem that I recently encountered. It has to do with […]

Bayesian Computation conference in January 2020

X writes to remind us of the Bayesian computation conference:
– BayesComp 2020 occurs on 7-10 January 2020 in Gainesville, Florida, USA
– Registration is open with regular rates till October 14, 2019
– Deadline for submission of poster proposals is December 15, 2019
– Deadline for travel support applications is September 20, 2019
– […]

“Beyond ‘Treatment Versus Control’: How Bayesian Analysis Makes Factorial Experiments Feasible in Education Research”

Daniel Kassler, Ira Nichols-Barrer, and Mariel Finucane write: Researchers often wish to test a large set of related interventions or approaches to implementation. A factorial experiment accomplishes this by examining not only basic treatment–control comparisons but also the effects of multiple implementation “factors” such as different dosages or implementation strategies and the interactions between these […]

Here are some examples of real-world statistical analyses that don’t use p-values and significance testing.

Joe Nadeau writes: I’ve followed the issues about p-values, signif. testing et al. both on blogs and in the literature. I appreciate the points raised, and the pointers to alternative approaches. All very interesting, provocative. My question is whether you and your colleagues can point to real world examples of these alternative approaches. It’s somewhat […]

For each parameter (or other qoi), compare the posterior sd to the prior sd. If the posterior sd for any parameter (or qoi) is more than 0.1 times the prior sd, then print out a note: “The prior distribution for this parameter is informative.”

Statistical models are placeholders. We lay down a model, fit it to data, use the fitted model to make inferences about quantities of interest (qois), check to see if the model’s implications are consistent with data and substantive information, and then go back to the model and alter, fix, update, augment, etc. Given that models […]
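The check described in the post title above is mechanical enough to automate. A minimal sketch, assuming the prior and posterior standard deviations for each parameter (or qoi) are already available as dictionaries keyed by name (the function name and the 0.1 threshold default are taken from the title; everything else here is hypothetical):

```python
def check_prior_informativeness(prior_sd, posterior_sd, threshold=0.1):
    """For each parameter (or qoi), compare posterior sd to prior sd.

    Returns a dict mapping each flagged name to the note from the post title:
    a prior is flagged as informative when the posterior sd exceeds
    threshold * prior sd.
    """
    notes = {}
    for name, p_sd in prior_sd.items():
        if posterior_sd[name] > threshold * p_sd:
            notes[name] = "The prior distribution for this parameter is informative."
    return notes
```

For example, a parameter with prior sd 1.0 and posterior sd 0.5 would be flagged (0.5 > 0.1 × 1.0), while one with prior sd 10.0 and posterior sd 0.5 would not.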

Conditional probability and police shootings

A political scientist writes: You might have already seen this, but in case not: PNAS published a paper [Officer characteristics and racial disparities in fatal officer-involved shootings, by David Johnson, Trevor Tress, Nicole Burkel, Carley Taylor, and Joseph Cesario] recently finding no evidence of racial bias in police shootings: Jonathan Mummolo and Dean Knox noted […]

Multilevel Bayesian analyses of the growth mindset experiment

Jared Murray, one of the coauthors of the Growth Mindset study we discussed yesterday, writes: Here are some pointers to details about the multilevel Bayesian modeling we did in the Nature paper, and some notes about ongoing & future work. We did a Bayesian analysis not dissimilar to the one you wished for! In section […]

Holes in Bayesian Philosophy: My talk for the philosophy of statistics conference this Wed.

4pm Wed 7 Aug 2019 at Virginia Tech (via videolink): Holes in Bayesian Philosophy Andrew Gelman, Department of Statistics and Department of Political Science, Columbia University Every philosophy has holes, and it is the responsibility of proponents of a philosophy to point out these problems. Here are a few holes in Bayesian data analysis: (1) […]

“This is a case where frequentist methods are simple and mostly work well, and the Bayesian analogs look unpleasant, requiring inference on lots of nuisance parameters that frequentists can bypass.”

Nick Patterson writes: I am a scientist/data analyst, still working, who has been using Bayesian methods since 1972 (getting on for 50 years). I was initially trained at the British code-breaking establishment GCHQ, by intellectual heirs of Alan Turing. I’ve been accused of being a Bayesian fanatic, but in fact a good deal of my […]