Archive of posts filed under the Bayesian Statistics category.

Multilevel Bayesian analyses of the growth mindset experiment

Jared Murray, one of the coauthors of the Growth Mindset study we discussed yesterday, writes: Here are some pointers to details about the multilevel Bayesian modeling we did in the Nature paper, and some notes about ongoing & future work. We did a Bayesian analysis not dissimilar to the one you wished for! In section […]

Holes in Bayesian Philosophy: My talk for the philosophy of statistics conference this Wed.

4pm Wed 7 Aug 2019 at Virginia Tech (via videolink): Holes in Bayesian Philosophy Andrew Gelman, Department of Statistics and Department of Political Science, Columbia University Every philosophy has holes, and it is the responsibility of proponents of a philosophy to point out these problems. Here are a few holes in Bayesian data analysis: (1) […]

“This is a case where frequentist methods are simple and mostly work well, and the Bayesian analogs look unpleasant, requiring inference on lots of nuisance parameters that frequentists can bypass.”

Nick Patterson writes: I am a scientist/data analyst, still working, who has been using Bayesian methods since 1972 (getting on for 50 years). I was initially trained at the British code-breaking establishment GCHQ, by intellectual heirs of Alan Turing. I’ve been accused of being a Bayesian fanatic, but in fact a good deal of my […]

Swimming upstream? Monitoring escaped statistical inferences in wild populations.

Anders Lamberg writes: In my mails to you [a few years ago], I told you about the Norwegian practice of monitoring proportion of escaped farmed salmon in wild populations. This practice results in a yearly updated list of the situation in each Norwegian salmon river (we have a total of 450 salmon rivers, but not […]

The Economist does Mister P

Elliott Morris points us to this magazine article, “If everyone had voted, Hillary Clinton would probably be president,” which reports: Close observers of America know that the rules of its democracy often favour Republicans. But the party’s biggest advantage may be one that is rarely discussed: turnout is just 60%, low for a rich country. […]
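For readers unfamiliar with “Mister P” (multilevel regression and poststratification), here is a minimal sketch of the poststratification step only: cell-level predictions from a multilevel regression, fit elsewhere, are averaged with weights proportional to each cell’s share of the target population. The cells, estimates, and counts below are invented for illustration, not taken from the Economist analysis.

import numpy as np

# Hypothetical cell-level predicted support (e.g., from a multilevel regression fit elsewhere)
cell_estimate = np.array([0.35, 0.48, 0.55, 0.62])
# Hypothetical census counts for the same demographic cells
cell_population = np.array([30_000, 45_000, 15_000, 10_000])

# Poststratification: weight each cell's prediction by its population share.
weights = cell_population / cell_population.sum()
population_estimate = np.sum(weights * cell_estimate)
print(f"poststratified estimate: {population_estimate:.3f}")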

From deviance, DIC, AIC, etc., to leave-one-out cross-validation

Maren Vranckx writes: I am writing in connection with a post on your blog on 22 June 2011 about “Deviance, DIC, AIC, cross-validation, etc”. In this post, you mentioned that you and a student worked on DIC convergence. Can you specify how you did the research? Did you discover a reason for slow convergence of […]
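As a complement, here is a minimal brute-force sketch of leave-one-out cross-validation, assuming a toy normal-mean model with known sigma so the posterior predictive density is available in closed form. The data and prior are invented for illustration; in practice one would use an approximation such as PSIS-LOO rather than n refits.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
y = rng.normal(loc=1.0, scale=2.0, size=50)   # toy data for a simple normal model

sigma = 2.0            # residual sd assumed known, to keep the posterior conjugate
mu0, tau0 = 0.0, 10.0  # weak normal prior on the mean

def posterior_predictive_logpdf(y_train, y_new):
    """Exact posterior predictive log density for a normal mean with known sigma."""
    n = len(y_train)
    prec = 1 / tau0**2 + n / sigma**2
    mu_post = (mu0 / tau0**2 + y_train.sum() / sigma**2) / prec
    sd_pred = np.sqrt(1 / prec + sigma**2)   # predictive sd: posterior plus data variance
    return stats.norm.logpdf(y_new, loc=mu_post, scale=sd_pred)

# Brute-force leave-one-out: hold out each observation in turn and score it.
elpd_loo = sum(
    posterior_predictive_logpdf(np.delete(y, i), y[i]) for i in range(len(y))
)
print(f"elpd_loo = {elpd_loo:.1f}")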

They’re looking to hire someone with good working knowledge of Bayesian inference algorithms development for multilevel statistical models and mathematical modeling of physiological systems.

Frederic Bois writes: We have an immediate opening for a highly motivated research / senior scientist with good working knowledge of Bayesian inference algorithms development for multilevel statistical models and mathematical modelling of physiological systems. The successful candidate will assist with the development of deterministic or stochastic methods and algorithms applicable to systems pharmacology/biology models […]

Collinearity in Bayesian models

Dirk Nachbar writes: We were having a debate about how much of a problem collinearity is in Bayesian models. I was arguing that it is not much of a problem. Imagine we have this model Y ~ N(a + bX1 + cX2, sigma) where X1 and X2 have some positive correlation (r > .5), they […]
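A minimal sketch of the setup described above, under the assumption of a conjugate normal linear regression with known sigma (so the posterior is available in closed form) and invented data: with correlated predictors the joint posterior stays well behaved, but the coefficients b and c become strongly negatively correlated, which is the usual symptom of collinearity.

import numpy as np

rng = np.random.default_rng(1)
n = 200
# Correlated predictors X1, X2 with r around 0.8 (illustrative choice).
cov = np.array([[1.0, 0.8], [0.8, 1.0]])
X12 = rng.multivariate_normal(mean=[0, 0], cov=cov, size=n)
X = np.column_stack([np.ones(n), X12])        # columns: intercept a, slopes b and c
sigma = 1.0
beta_true = np.array([0.5, 1.0, -0.5])
y = X @ beta_true + rng.normal(scale=sigma, size=n)

# Conjugate posterior for (a, b, c) with prior N(0, 10^2 I) and known sigma.
prior_prec = np.eye(3) / 10.0**2
post_prec = prior_prec + X.T @ X / sigma**2
post_cov = np.linalg.inv(post_prec)
post_mean = post_cov @ (X.T @ y / sigma**2)

corr_bc = post_cov[1, 2] / np.sqrt(post_cov[1, 1] * post_cov[2, 2])
print("posterior mean:", np.round(post_mean, 2))
print(f"posterior correlation of b and c: {corr_bc:.2f}")  # strongly negative under collinearity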

Read this: it’s about importance sampling!

Importance sampling plays an odd role in statistical computing. It’s an old-fashioned idea and can behave just horribly if applied straight-up—but it keeps arising in different statistics problems. Aki came up with Pareto-smoothed importance sampling (PSIS) for leave-one-out cross-validation. We recently revised the PSIS article and Dan Simpson wrote a useful blog post about it […]
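For readers who have not seen it, here is a minimal sketch of plain self-normalized importance sampling, the “straight-up” version referred to above. The target, proposal, and estimand are invented for illustration, and the Pareto smoothing step of PSIS (fitting a generalized Pareto distribution to the largest weights) is not shown.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Target p: standard normal.  Proposal q: Student-t with 3 df (heavier tails, so weights behave).
draws = stats.t.rvs(df=3, size=10_000, random_state=rng)
log_w = stats.norm.logpdf(draws) - stats.t.logpdf(draws, df=3)
w = np.exp(log_w - log_w.max())              # stabilize before exponentiating

# Self-normalized estimate of E_p[X^2] (true value is 1).
estimate = np.sum(w * draws**2) / np.sum(w)

# Effective sample size diagnostic: small values flag unreliable weights.
ess = np.sum(w)**2 / np.sum(w**2)
print(f"estimate of E[X^2]: {estimate:.3f}, effective sample size: {ess:.0f}")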

All I need is time, a moment that is mine, while I’m in between

You’re an ordinary boy and that’s the way I like it – Magic Dirt. Look. I’ll say something now, so it’s off my chest. I hate order statistics. I loathe them. I detest them. I wish them nothing but ill and strife. They are just awful. And I’ve spent the last god only knows how long […]

The garden of 603,979,752 forking paths

Amy Orben and Andrew Przybylski write: The widespread use of digital technologies by young people has spurred speculation that their regular use negatively impacts psychological well-being. Current empirical evidence supporting this idea is largely based on secondary analyses of large-scale social datasets. Though these datasets provide a valuable resource for highly powered investigations, their many […]

How does Stan work? A reading list.

Bob writes, to someone who is doing work on the Stan language: The basic execution structure of Stan is in the JSS paper (by Bob Carpenter, Andrew Gelman, Matt Hoffman, Daniel Lee, Ben Goodrich, Michael Betancourt, Marcus Brubaker, Jiqiang Guo, Peter Li, and Allen Riddell) and in the reference manual. The details of autodiff are in […]
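As a complement to the reading list, here is a minimal reverse-mode automatic differentiation sketch. This is not Stan’s C++ implementation, only an illustration of the core idea of recording operations and propagating adjoints backward through the expression graph.

# Minimal reverse-mode autodiff: each node stores its value, its parents,
# and the local partial derivatives needed to push adjoints backward.
class Var:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents        # sequence of (parent_node, d(self)/d(parent))
        self.adjoint = 0.0

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self):
        # Topologically order the graph so each node's adjoint is complete
        # before it is pushed to its parents.
        order, seen = [], set()
        def visit(node):
            if id(node) not in seen:
                seen.add(id(node))
                for parent, _ in node.parents:
                    visit(parent)
                order.append(node)
        visit(self)
        self.adjoint = 1.0
        for node in reversed(order):
            for parent, local_grad in node.parents:
                parent.adjoint += local_grad * node.adjoint

# Example: f(x, y) = x * y + x, so df/dx = y + 1 and df/dy = x.
x, y = Var(2.0), Var(3.0)
f = x * y + x
f.backward()
print(f.value, x.adjoint, y.adjoint)   # 8.0 4.0 2.0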

AnnoNLP conference on data coding for natural language processing

This workshop should be really interesting: Aggregating and analysing crowdsourced annotations for NLP, an EMNLP workshop, November 3–4, 2019, Hong Kong. Silviu Paun and Dirk Hovy are co-organizing it. They’re very organized and know this area as well as anyone. I’m on the program committee, but won’t be able to attend. I really like the problem […]

We should be open-minded, but not selectively open-minded.

I wrote this post a while ago but it just appeared . . . I liked this line so much I’m posting it on its own: We should be open-minded, but not selectively open-minded. This is related to the research incumbency effect and all sorts of other things we’ve talked about over the years. There’s a […]

Pharmacometrics meeting in Paris on the afternoon of 11 July 2019

Julie Bertrand writes: The pharmacometrics group led by France Mentre (IAME, INSERM, Univ Paris) is very pleased to host a free ISoP Statistics and Pharmacometrics (SxP) SIG local event at Faculté Bichat, 16 rue Henri Huchard, 75018 Paris, on Thursday afternoon the 11th of July 2019. It will feature talks from Professor Andrew Gelman, Univ […]

New! from Bales/Pourzanjani/Vehtari/Petzold: Selecting the Metric in Hamiltonian Monte Carlo

Ben Bales, Arya Pourzanjani, Aki Vehtari, and Linda Petzold write: We present a selection criterion for the Euclidean metric adapted during warmup in a Hamiltonian Monte Carlo sampler that makes it possible for a sampler to automatically pick the metric based on the model and the availability of warmup draws. Additionally, we present a new […]
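For context, here is a minimal sketch of the standard diagonal Euclidean metric adaptation that the paper builds on: estimate the marginal posterior variances from warmup draws and use a regularized version as the inverse metric. The shrinkage constant below is an illustrative choice, not the exact formula of any particular sampler, and the paper’s selection criterion between metrics is not reproduced here.

import numpy as np

def diagonal_inverse_metric(warmup_draws, shrink=5.0):
    """Diagonal inverse metric from warmup draws: a regularized estimate of the
    marginal posterior variances.  The shrinkage toward unit scale is an
    illustrative choice, not the exact rule any particular sampler uses."""
    n = warmup_draws.shape[0]
    var = np.var(warmup_draws, axis=0, ddof=1)
    return (n * var + shrink) / (n + shrink)

# Toy warmup draws from a posterior with very different scales (invented for illustration).
rng = np.random.default_rng(3)
warmup = rng.normal(scale=[0.1, 10.0], size=(500, 2))
print(np.round(diagonal_inverse_metric(warmup), 3))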

Peter Ellis on Forecasting Antipodal Elections with Stan

I liked this intro to Peter Ellis from Rob J. Hyndman’s talk announcement: He [Peter Ellis] started forecasting elections in New Zealand as a way to learn how to use Stan, and the hobby has stuck with him since he moved back to Australia in late 2018. You may remember Peter from my previous post […]

Against Arianism 3: Consider the cognitive models of the field

“You took my sadness out of context at the Mariners Apartment Complex” – Lana Del Rey. It’s sunny, I’m in England, and I’m having a very tasty beer, and Lauren, Andrew, and I just finished a paper called The experiment is just as important as the likelihood in understanding the prior: A cautionary note on robust […]

Neural nets vs. regression models

Eliot Johnson writes: I have a question concerning papers comparing two broad domains of modeling: neural nets and statistical models. Both terms are catch-alls, within each of which there are, quite obviously, multiple subdomains. For instance, NNs could include ML, DL, AI, and so on. While statistical models should include panel data, time series, hierarchical […]

“Did Jon Stewart elect Donald Trump?”

I wrote this post a couple weeks ago and scheduled it for October, but then I learned from a reporter that the research article under discussion was retracted, so it seemed to make sense to post this right away while it was still newsworthy. My original post is below, followed by a post script regarding […]