Archive of posts filed under the Bayesian Statistics category.

The typical set and its relevance to Bayesian computation

tl;dr The typical set (at some level of coverage) is the set of parameter values for which the log density (the target function) is close to its expected value. As has been much discussed, this is not the same as the posterior mode. In a d-dimensional unit normal distribution with a high value of d, […]
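
To make the geometry concrete, here is a minimal simulation sketch (mine, not from the post; plain numpy with an arbitrary seed) showing that draws from a d-dimensional unit normal concentrate at radius about sqrt(d), far from the mode at the origin, while the log density concentrates around its expectation of -d/2 (up to a constant):

import numpy as np

rng = np.random.default_rng(1)
for d in (1, 10, 100, 1000):
    x = rng.standard_normal((10_000, d))       # 10,000 draws from N(0, I_d)
    r = np.linalg.norm(x, axis=1)              # distance of each draw from the mode
    logp = -0.5 * r**2                         # log density up to a constant; E[logp] = -d/2
    print(f"d={d:4d}  mean radius={r.mean():7.2f}  sqrt(d)={np.sqrt(d):6.2f}  "
          f"mean log density={logp.mean():8.1f}  -d/2={-d / 2:7.1f}")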

My talk this Wed 7:30pm (NY time) / Thurs 9:30am (Australian time) at the Victorian Centre for Biostatistics

The “Victorian Centre for Biostatistics,” huh? I guess maybe I should speak about Francis Galton or something. Actually, though, I’ll be giving this talk: Bayesian workflow as demonstrated with a coronavirus example We recently fit a series of models to account for uncertainty and variation in coronavirus tests (see here). We will talk about the […]

“Frequentism-as-model”

Christian Hennig writes: Most statisticians are aware that probability models interpreted in a frequentist manner are not really true in objective reality, but only idealisations. I [Hennig] argue that this is often ignored when actually applying frequentist methods and interpreting the results, and that keeping up the awareness for the essential difference between reality and […]

Dispelling confusion about MRP (multilevel regression and poststratification) for survey analysis

A colleague pointed me to this post from political analyst Nate Silver: At the risk of restarting the MRP [multilevel regression and poststratification] wars: For the last 3 models I’ve designed (midterms, primaries, now revisiting stuff for the general) trying to impute how a state will vote based on its demographics & polls of voters […]

Korean translation of BDA3!

Hyunji Moon organized a translation of BDA3 into Korean! Here’s the link. And here is the note we wrote to accompany it: Dear Korean readers of Bayesian Data Analysis: We are happy that our book will reach a new audience through this Korean translation. Math is math and graphs are graphs, but the book is […]

Regression and Other Stories is available!

This will be, without a doubt, the most fun you’ll have ever had reading a statistics book. Also I think you’ll learn a few things reading it. I know that we learned a lot writing it. Regression and Other Stories started out as the first half of Data Analysis Using Regression and Multilevel/Hierarchical Models, but […]

Priors on effect size in A/B testing

I just saw this interesting applied-focused post by Kaiser Fung on non-significance in A/B testing. Kaiser was responding to a post by Ron Kohavi. I can’t find Kohavi’s note anywhere, but you can read Kaiser’s post to get the picture. Here I want to pick out a few sentences from Kaiser’s post: Kohavi correctly points […]

Shortest posterior intervals

By default we use central posterior intervals. For example, the central 95% interval is the (2.5%, 97.5%) quantiles. But sometimes the central interval doesn’t seem right. This came up recently with a coronavirus testing example, where the posterior distribution for the parameter of interest was asymmetric so that the central interval is not such a […]
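
For draws from an asymmetric posterior, the shortest interval can be computed by sliding a window over the sorted draws and keeping the narrowest window that contains the required fraction. A minimal sketch (the function name and the example distribution are mine, not from the post):

import numpy as np

def shortest_interval(draws, coverage=0.95):
    x = np.sort(np.asarray(draws))
    n = len(x)
    k = int(np.ceil(coverage * n))             # draws each candidate interval must contain
    widths = x[k - 1:] - x[:n - k + 1]         # width of every contiguous window of k sorted draws
    i = np.argmin(widths)                      # narrowest window wins
    return x[i], x[i + k - 1]

rng = np.random.default_rng(2)
draws = rng.gamma(shape=2.0, scale=1.0, size=100_000)   # an asymmetric "posterior"
print("central  95%:", np.percentile(draws, [2.5, 97.5]))
print("shortest 95%:", shortest_interval(draws))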

Validating Bayesian model comparison using fake data

A neuroscience graduate student named James writes in with a question regarding validating Bayesian model comparison using synthetic data: I [James] perform an experiment and collect real data. I want to determine which of 2 candidate models best accounts for the data. I perform (approximate) Bayesian model comparison (e.g., using BIC – not ideal I […]
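
The fake-data logic can be illustrated with a toy version of the check (the models and numbers here are stand-ins of mine, not James's): simulate data from a known model, compute BIC for both candidates, and count how often the criterion recovers the truth:

import numpy as np

def normal_loglik(y, mu, sigma):
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2) - (y - mu) ** 2 / (2 * sigma**2))

def bic(loglik, n_params, n_obs):
    return n_params * np.log(n_obs) - 2 * loglik        # lower is better

rng = np.random.default_rng(3)
n, true_mu = 200, 0.3
wins = 0
for _ in range(100):                                    # 100 fake datasets
    y = rng.normal(true_mu, 1.0, size=n)                # truth: the mean is nonzero
    # model 1: mu fixed at 0, sigma free (1 parameter); model 2: mu and sigma free (2 parameters)
    bic1 = bic(normal_loglik(y, 0.0, np.sqrt(np.mean(y**2))), 1, n)
    bic2 = bic(normal_loglik(y, y.mean(), y.std(ddof=0)), 2, n)
    wins += bic2 < bic1
print(f"BIC picked the true (free-mean) model in {wins} of 100 fake datasets")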

Stacking for Non-mixing Bayesian Computations: The Curse and Blessing of Multimodal Posteriors

Yuling, Aki, and I write: When working with multimodal Bayesian posterior distributions, Markov chain Monte Carlo (MCMC) algorithms can have difficulty moving between modes, and default variational or mode-based approximate inferences will understate posterior uncertainty. And, even if the most important modes can be found, it is difficult to evaluate their relative weights in the […]
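
The core stacking idea from the abstract can be sketched in toy form (my illustration, not the paper's code, and it uses an in-sample log score where the paper uses leave-one-out): given each chain's predictive density at a set of points, choose simplex weights to maximize the log score of the mixture:

import numpy as np
from scipy.optimize import minimize
from scipy.special import softmax
from scipy.stats import norm

rng = np.random.default_rng(4)
y = np.concatenate([rng.normal(-2, 1, 300), rng.normal(2, 1, 700)])   # bimodal data
# pretend two non-mixing chains each found one mode; p[i, k] = k-th chain's density at y[i]
p = np.column_stack([norm.pdf(y, -2, 1), norm.pdf(y, 2, 1)])

def neg_log_score(a):                       # unconstrained a, mapped to the simplex via softmax
    return -np.sum(np.log(p @ softmax(a) + 1e-300))

res = minimize(neg_log_score, np.zeros(p.shape[1]), method="Nelder-Mead")
print("stacking weights:", softmax(res.x))  # should land near the true 0.3 / 0.7 mix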

Do we really believe the Democrats have an 88% chance of winning the presidential election?

OK, enough about coronavirus. Time to talk about the election. Dhruv Madeka starts things off with this email: Someone just forwarded me your election model (with Elliott Morris and Merlin Heidemanns) for the Economist. I noticed Biden was already at 84%. I wrote a few years ago about how the time to election factors a […]

Aki’s talk about reference models in model selection in Laplace’s demon series

I (Aki) will talk about reference models in model selection in the Laplace’s demon series on 24 June at 15:00 UTC (18:00 in Finland, 17:00 in Paris, 11:00 in New York). See the seminar series website for a registration link, the schedule of other talks, and the list of recorded talks. The short summary: 1) Why a bigger model helps inference for […]

“Laplace’s Demon: A Seminar Series about Bayesian Machine Learning at Scale” and my answers to their questions

Here’s the description of the online seminar series: Machine learning is changing the world we live in at a breakneck pace. From image recognition and generation to the deployment of recommender systems, it seems to be breaking new ground constantly and influencing almost every aspect of our lives. In this seminar series we ask […]

“Estimating the effects of non-pharmaceutical interventions on COVID-19 in Europe”

Seth Flaxman writes: Our work on non-pharmaceutical interventions in 11 European countries (originally Imperial report 13) is now published in Nature, Estimating the effects of non-pharmaceutical interventions on COVID-19 in Europe. Of note for your readers: 1) Nature has an open peer review process, so you can see the (pre-publication) peer review here. 2) Between […]

Improving our election poll aggregation model

Luke Mansillo saw our election poll aggregation model and writes: I had a look at the Stan code and I wondered if the model that you, Merlin Heidemanns, and Elliott Morris were implementing was not really Drew Linzer’s model but rather Simon Jackman’s model. I realise that Linzer published Dynamic Bayesian Forecasting of Presidential Elections […]

Election 2020 is coming: Our poll aggregation model with Elliott Morris of the Economist

Here it is. The model is vaguely based on our past work on Bayesian combination of state polls and election forecasts but with some new twists. And, check it out: you can download our R and Stan source code and the data! Merlin Heidemanns wrote much of the code, which in turn is based on […]

Faster than ever before: Hamiltonian Monte Carlo using an adjoint-differentiated Laplace approximation

Charles Margossian, Aki Vehtari, Daniel Simpson, and Raj Agrawal write: Gaussian latent variable models are a key class of Bayesian hierarchical models with applications in many fields. Performing Bayesian inference on such models can be challenging, as Markov chain Monte Carlo algorithms struggle with the geometry of the resulting posterior distribution and can be prohibitively slow. […]
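
The paper's adjoint-differentiated embedded Laplace approximation is considerably more sophisticated, but the basic ingredient can be sketched simply (the toy model and numbers are mine): approximate the posterior by a Gaussian centered at the mode, with covariance taken from an inverse-Hessian estimate of the negative log density:

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(5)
y = rng.normal(1.5, 2.0, size=50)           # toy data

def neg_log_post(theta):                    # flat prior, so this is just the negative log likelihood
    mu, log_sigma = theta
    return -np.sum(norm.logpdf(y, mu, np.exp(log_sigma)))

res = minimize(neg_log_post, x0=np.zeros(2), method="BFGS")
mode, cov = res.x, res.hess_inv             # BFGS's inverse-Hessian estimate as the Laplace covariance
print("mode (mu, log sigma):", mode)
print("Laplace posterior sd:", np.sqrt(np.diag(cov)))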

Using the rank-based inverse normal transformation

Luca La Rocca writes: You may like to know that the approach suggested in your post, Don’t do the Wilcoxon, is qualified as “common practice in Genome-Wide Association Studies”, according to this forthcoming paper in Biometrics to which I have no connection (and which I didn’t inspect beyond the Introduction). The idea is that, instead […]
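
The transformation itself is simple to sketch (the function name is mine; the 0.5 rank offset is one common convention, with Blom's variant using 3/8): replace each observation by the standard-normal quantile of its shifted rank:

import numpy as np
from scipy.stats import norm, rankdata

def inverse_normal_transform(x, offset=0.5):
    x = np.asarray(x, dtype=float)
    r = rankdata(x)                         # ties get average ranks
    return norm.ppf((r - offset) / (len(x) - 2 * offset + 1))

skewed = np.random.default_rng(6).exponential(size=1000)
z = inverse_normal_transform(skewed)
print(round(z.mean(), 3), round(z.std(), 3))   # roughly 0 and 1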

Laplace’s Theories of Cognitive Illusions, Heuristics and Biases

A few years ago, Josh “Don’t call him ‘hot hand’” Miller read Laplace’s classic book on probability theory and noticed that it anticipated much of the “heuristics and biases” literature (also called “cognitive illusions” or “behavioral economics”) of the past fifty years. We wrote up our ideas and, years later, our article made it into […]

Sequential Bayesian Designs for Rapid Learning in COVID-19 Clinical Trials

This from Frank Harrell looks important: This trial will adopt a Bayesian framework. Continuous learning from data and computation of probabilities that are directly applicable to decision making in the face of uncertainty are hallmarks of the Bayesian approach. Bayesian sequential designs are the simplest of flexible designs, and continuous learning capitalizes on their efficiency, […]
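
In the spirit of that description (a generic toy of mine, not Harrell's actual design), a sequential Bayesian stopping rule can be sketched with Beta-Binomial updating: after each batch of patients, compute the posterior probability that the treatment arm has the higher response rate, and stop once it crosses a threshold:

import numpy as np

rng = np.random.default_rng(7)
p_ctrl, p_trt = 0.20, 0.35                  # true response rates, known only to the simulation
a_c = b_c = a_t = b_t = 1.0                 # Beta(1, 1) priors on each arm's response rate

for batch in range(1, 21):                  # up to 20 batches of 25 + 25 patients
    yc, yt = rng.binomial(25, p_ctrl), rng.binomial(25, p_trt)
    a_c += yc; b_c += 25 - yc; a_t += yt; b_t += 25 - yt
    # Monte Carlo estimate of P(treatment rate > control rate | data so far)
    prob = np.mean(rng.beta(a_t, b_t, 10_000) > rng.beta(a_c, b_c, 10_000))
    print(f"batch {batch:2d}: P(treatment better) = {prob:.3f}")
    if prob > 0.99:
        print("stop early: efficacy threshold reached")
        break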