Archive of posts filed under the Bayesian Statistics category.

They’re looking to hire someone with a good working knowledge of Bayesian inference algorithm development for multilevel statistical models and mathematical modeling of physiological systems.

Frederic Bois writes: We have an immediate opening for a highly motivated research / senior scientist with good working knowledge of Bayesian inference algorithms development for multilevel statistical models and mathematical modelling of physiological systems. The successful candidate will assist with the development of deterministic or stochastic methods and algorithms applicable to systems pharmacology/biology models […]

Collinearity in Bayesian models

Dirk Nachbar writes: We were having a debate about how much of a problem collinearity is in Bayesian models. I was arguing that it is not much of a problem. Imagine we have this model Y ~ N(a + bX1 + cX2, sigma) where X1 and X2 have some positive correlation (r > .5), they […]
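
The excerpt cuts off, but the setup is concrete enough to illustrate. Here is a minimal base-R sketch (the sample size, correlation, and coefficients are all made up for illustration): with flat priors the posterior for (b, c) is centered at the least-squares estimates, so repeated least-squares fits are a reasonable stand-in for the joint posterior of the two slopes. The point is that collinearity widens and correlates the marginals of b and c without breaking the model.

```r
# Illustrative sketch: correlated predictors induce negatively correlated
# coefficient estimates, but the model remains well behaved.
set.seed(1)
n <- 200
r <- 0.8                                  # assumed predictor correlation
X1 <- rnorm(n)
X2 <- r * X1 + sqrt(1 - r^2) * rnorm(n)   # Corr(X1, X2) is approximately r
sims <- replicate(1000, {
  y <- 1 + 0.5 * X1 + 0.5 * X2 + rnorm(n) # a = 1, b = c = 0.5, sigma = 1
  coef(lm(y ~ X1 + X2))[c("X1", "X2")]
})
cor(sims["X1", ], sims["X2", ])           # strongly negative when r is large
```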

Read this: it’s about importance sampling!

Importance sampling plays an odd role in statistical computing. It’s an old-fashioned idea and can behave just horribly if applied straight-up—but it keeps arising in different statistics problems. Aki came up with Pareto-smoothed importance sampling (PSIS) for leave-one-out cross-validation. We recently revised the PSIS article and Dan Simpson wrote a useful blog post about it […]
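
For readers who haven’t seen the failure mode, here is a minimal base-R sketch (the target, proposal, and sizes are assumptions chosen to make the problem visible): a normal proposal has lighter tails than a Student-t target, so the importance ratios are unbounded and a handful of draws can carry most of the weight. PSIS addresses exactly this by fitting a generalized Pareto distribution to the largest ratios and replacing them with smoothed values (the loo R package’s psis() function implements it).

```r
# Plain self-normalized importance sampling: estimate E[X] under a t_3
# target using draws from a N(0, 1) proposal. The light-tailed proposal
# makes the weights unstable.
set.seed(2)
draws <- rnorm(1e4)                                 # proposal draws
logw  <- dt(draws, df = 3, log = TRUE) -
         dnorm(draws, log = TRUE)                   # log importance ratios
w     <- exp(logw - max(logw))                      # shift before exponentiating
sum(w * draws) / sum(w)                             # IS estimate of E[X] (truth: 0)
max(w) / sum(w)                                     # share of weight on one draw
```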

All I need is time, a moment that is mine, while I’m in between

You’re an ordinary boy and that’s the way I like it – Magic Dirt. Look, I’ll say something now, so it’s off my chest. I hate order statistics. I loathe them. I detest them. I wish them nothing but ill and strife. They are just awful. And I’ve spent the last god only knows how long […]

The garden of 603,979,752 forking paths

Amy Orben and Andrew Przybylski write: The widespread use of digital technologies by young people has spurred speculation that their regular use negatively impacts psychological well-being. Current empirical evidence supporting this idea is largely based on secondary analyses of large-scale social datasets. Though these datasets provide a valuable resource for highly powered investigations, their many […]

How does Stan work? A reading list.

Bob writes, to someone who is doing work on the Stan language: The basic execution structure of Stan is in the JSS paper (by Bob Carpenter, Andrew Gelman, Matt Hoffman, Daniel Lee, Ben Goodrich, Michael Betancourt, Marcus Brubaker, Jiqiang Guo, Peter Li, and Allen Riddell) and in the reference manual. The details of autodiff are in […]

AnnoNLP conference on data coding for natural language processing

This workshop should be really interesting: Aggregating and Analysing Crowdsourced Annotations for NLP, an EMNLP workshop, November 3–4, 2019, in Hong Kong. Silviu Paun and Dirk Hovy are co-organizing it. They’re very organized and know this area as well as anyone. I’m on the program committee, but won’t be able to attend. I really like the problem […]

We should be open-minded, but not selectively open-minded.

I wrote this post a while ago but it just appeared . . . I liked this line so much I’m posting it on its own: We should be open-minded, but not selectively open-minded. This is related to the research incumbency effect and all sorts of other things we’ve talked about over the years. There’s a […]

Pharmacometrics meeting in Paris on the afternoon of 11 July 2019

Julie Bertrand writes: The pharmacometrics group led by France Mentré (IAME, INSERM, Univ Paris) is very pleased to host a free ISoP Statistics and Pharmacometrics (SxP) SIG local event at Faculté Bichat, 16 rue Henri Huchard, 75018 Paris, on Thursday afternoon the 11th of July 2019. It will feature talks from Professor Andrew Gelman, Univ […]

New! from Bales/Pourzanjani/Vehtari/Petzold: Selecting the Metric in Hamiltonian Monte Carlo

Ben Bales, Arya Pourzanjani, Aki Vehtari, and Linda Petzold write: We present a selection criterion for the Euclidean metric adapted during warmup in a Hamiltonian Monte Carlo sampler that makes it possible for a sampler to automatically pick the metric based on the model and the availability of warmup draws. Additionally, we present a new […]
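
For context, here is the standard adaptation step the paper builds on, not the paper’s selection criterion itself (the warmup draws below are placeholders): Stan-style samplers typically set the inverse Euclidean metric from the empirical variances of the warmup draws (a diagonal metric) or from their full covariance (a dense metric); the paper’s contribution is a criterion for picking among such options automatically.

```r
# Sketch of how a Euclidean metric is typically estimated from warmup draws.
warmup_draws <- matrix(rnorm(500 * 3), ncol = 3)  # placeholder: 500 draws, 3 parameters
inv_metric_diag  <- apply(warmup_draws, 2, var)   # diagonal inverse metric
inv_metric_dense <- cov(warmup_draws)             # dense inverse metric
```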

Peter Ellis on Forecasting Antipodal Elections with Stan

I liked this intro to Peter Ellis from Rob J. Hyndman’s talk announcement: He [Peter Ellis] started forecasting elections in New Zealand as a way to learn how to use Stan, and the hobby has stuck with him since he moved back to Australia in late 2018. You may remember Peter from my previous post […]

Against Arianism 3: Consider the cognitive models of the field

“You took my sadness out of context at the Mariners Apartment Complex” – Lana Del Rey It’s sunny, I’m in England, and I’m having a very tasty beer, and Lauren, Andrew, and I just finished a paper called The experiment is just as important as the likelihood in understanding the prior: A cautionary note on robust […]

Neural nets vs. regression models

Eliot Johnson writes: I have a question concerning papers comparing two broad domains of modeling: neural nets and statistical models. Both terms are catch-alls, within each of which there are, quite obviously, multiple subdomains. For instance, NNs could include ML, DL, AI, and so on. While statistical models should include panel data, time series, hierarchical […]

“Did Jon Stewart elect Donald Trump?”

I wrote this post a couple weeks ago and scheduled it for October, but then I learned from a reporter that the research article under discussion was retracted, so it seemed to make sense to post this right away while it was still newsworthy. My original post is below, followed by a postscript regarding […]

Scandal! Mister P appears in British tabloid.

Tim Morris points us to this news article: And here’s the kicker: Mister P. Not quite as cool as the time I was mentioned in Private Eye, but it’s still pretty satisfying. My next goal: Getting a mention in Sports Illustrated. (More on this soon.) In all seriousness, it’s so cool when methods that my […]

Stan examples in Harezlak, Ruppert and Wand (2018) Semiparametric Regression with R

I saw earlier drafts of this when it was in preparation and they were great. Jarek Harezlak, David Ruppert and Matt P. Wand. 2018. Semiparametric Regression with R. Use R! Series. Springer. I particularly like the careful evaluation of variational approaches. I also very much like that it’s packed with visualizations and largely based on worked […]

A thought on Bayesian workflow: calculating a likelihood ratio for data compared to peak likelihood.

Daniel Lakeland writes: Ok, so it’s really deep into the comments and I’m guessing there’s a chance you will miss it so I wanted to point at my comments here and here. In particular, the second one, which suggests something that it might be useful to recommend for Bayesian workflows: calculating a likelihood ratio for […]
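
Here is my reading of the suggestion, in a toy normal-mean model (the data, prior, and posterior are all assumed for illustration): for each posterior draw, compare the likelihood of the data at that draw to the peak likelihood at the maximum likelihood estimate. If the ratio is far below 1 for most draws, that can flag tension between the prior and the data.

```r
# Toy example: likelihood at posterior draws relative to the peak likelihood.
set.seed(3)
y <- rnorm(20, mean = 1)                               # made-up data, sd known = 1
loglik <- function(mu) sum(dnorm(y, mu, 1, log = TRUE))
# With a flat prior the posterior for mu is N(mean(y), 1/sqrt(n)):
mu_draws <- rnorm(4000, mean(y), 1 / sqrt(length(y)))
lr <- exp(sapply(mu_draws, loglik) - loglik(mean(y)))  # ratio to the peak, <= 1
quantile(lr, c(0.05, 0.5, 0.95))
```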

Several post-doc positions in probabilistic programming etc. in Finland

There are several open post-doc positions at Aalto University and the University of Helsinki in 1. probabilistic programming, 2. simulator-based inference, 3. data-efficient deep learning, 4. privacy-preserving and secure methods, 5. interactive AI. All these research programs are connected and collaborating. I (Aki) am the coordinator for project 1 and a contributor to the others. Overall […]

R-squared for multilevel models

Brandon Sherman writes: I was just having a discussion with someone about multilevel models, and the following topic came up. Imagine we’re building a multilevel model to predict SAT scores using many students. First we fit a model on students only, then students in classrooms, then students in classrooms within districts, the previous case […]
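
The excerpt stops mid-thought, but one concrete answer to “R-squared for multilevel models” is the Bayesian R² of Gelman, Goodrich, Gabry, and Vehtari (2019): compute, per posterior draw, the variance of the linear predictor divided by that variance plus the residual variance (rstanarm’s bayes_R2() implements this). A minimal sketch with assumed inputs:

```r
# yhat_draws: draws x observations matrix of posterior linear predictors;
# sigma_draws: vector of residual-sd draws. Both assumed to come from a
# fitted multilevel model (e.g., via rstanarm or brms).
bayes_R2 <- function(yhat_draws, sigma_draws) {
  var_fit <- apply(yhat_draws, 1, var)  # variance of the fit, per draw
  var_fit / (var_fit + sigma_draws^2)   # returns one R^2 value per draw
}
```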

“Sometimes all we have left are pictures and fear”: Dan Simpson talk in Columbia stat dept, 4pm Monday

4:10pm Monday, April 22 in Social Work Bldg room 903: Data is getting weirder. Statistical models and techniques are more complex than they have ever been. No one understands what code does. But at the same time, statistical tools are being used by a wider range of people than at any time in the past. […]