Check this out! RStudio now has special features for Stan:

- Improved, context-aware autocompletion for Stan files and chunks
- A document outline, which allows for easy navigation between Stan code blocks
- Inline diagnostics, which help to find issues while you develop your Stan model
- The ability to interrupt Stan parallel workers launched […]


## Limitations of “Limitations of Bayesian Leave-One-Out Cross-Validation for Model Selection”

“If you will believe in your heart and confess with your lips, surely you will be saved one day” – The Mountain Goats paraphrasing Romans 10:9 One of the weird things about working with people a lot is that it doesn’t always translate into multiple opportunities to see them talk. I’m pretty sure the only […]

## Why are functional programming languages so popular in the programming languages community?

Matthijs Vákár writes: Re the popularity of functional programming and Church-style languages in the programming languages community: there is a strong sentiment in that community that functional programming provides important high-level primitives that make it easy to write correct programs. This is because functional code tends to be very short and easy to reason about […]

## Using Stacking to Average Bayesian Predictive Distributions (with Discussion)

I’ve posted on this paper (by Yuling Yao, Aki Vehtari, Daniel Simpson, and myself) before, but now the final version has been published, along with a bunch of interesting discussions and our rejoinder. This has been an important project for me, as it answers a question that’s been bugging me for over 20 years (since […]

## “Dynamically Rescaled Hamiltonian Monte Carlo for Bayesian Hierarchical Models”

Aki points us to this paper by Tore Selland Kleppe, which begins: Dynamically rescaled Hamiltonian Monte Carlo (DRHMC) is introduced as a computationally fast and easily implemented method for performing full Bayesian analysis in hierarchical statistical models. The method relies on introducing a modified parameterisation so that the re-parameterised target distribution has close to constant […]

## A.I. parity with the West in 2020

Someone just sent me a link to an editorial by Ken Church, in the journal Natural Language Engineering (who knew that journal was still going? I’d have thought open access would’ve killed it). The abstract of Church’s column says of China, There is a bold government plan for AI with specific milestones for parity with […]

## StanCon Helsinki streaming live now (and tomorrow)

We’re streaming live right now! Thursday 08:45-17:30: YouTube Link Friday 09:00-17:00: YouTube Link Timezone is Eastern European Summer Time (EEST) +0300 UTC Here’s a link to the full program [link fixed]. There have already been some great talks and they’ll all be posted with slides and runnable source code after the conference on the Stan […]

## Three informal case studies: (1) Monte Carlo EM, (2) a new approach to C++ matrix autodiff with closures, (3) C++ serialization via parameter packs

Andrew suggested I cross-post these from the Stan forums to his blog, so here goes. Maximum marginal likelihood and posterior approximations with Monte Carlo expectation maximization: I unpack the goal of max marginal likelihood and approximate Bayes with MMAP and Laplace approximations. I then go through the basic EM algorithm (with a traditional analytic example […]
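For readers unfamiliar with the first case study's starting point, the classic EM recipe alternates an expectation step (responsibilities under the current parameters) with a maximization step (closed-form updates). Here is a minimal Python sketch for a two-component Gaussian mixture with known, shared scale — an illustration of the basic algorithm, not code from the forum posts:

```python
import math

def em_two_gaussians(data, mu=(-1.0, 1.0), sigma=1.0, w=0.5, iters=50):
    """EM for a two-component Gaussian mixture with known, shared sigma.
    w is the mixing weight of component 0."""
    mu0, mu1 = mu
    for _ in range(iters):
        # E-step: responsibility of component 1 for each data point.
        resp = []
        for x in data:
            p0 = w * math.exp(-0.5 * ((x - mu0) / sigma) ** 2)
            p1 = (1.0 - w) * math.exp(-0.5 * ((x - mu1) / sigma) ** 2)
            resp.append(p1 / (p0 + p1))
        # M-step: closed-form updates for the means and mixing weight.
        r1 = sum(resp)
        r0 = len(data) - r1
        mu0 = sum((1.0 - r) * x for r, x in zip(resp, data)) / r0
        mu1 = sum(r * x for r, x in zip(resp, data)) / r1
        w = r0 / len(data)
    return mu0, mu1, w

# Two well-separated clusters: EM recovers means near -2 and +2.
mu0, mu1, w = em_two_gaussians([-2.2, -1.9, -2.1, -1.8, 1.9, 2.1, 2.2, 1.8])
```

The Monte Carlo variant discussed in the case study replaces the analytic E-step expectation with a sample-based estimate when no closed form is available.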

## Continuous tempering through path sampling

Yuling prepared this poster summarizing our recent work on path sampling using a continuous joint distribution. The method is really cool and represents a real advance over what Xiao-Li and I were doing in our 1998 paper. It’s still gonna have problems in high or even moderate dimensions, and ultimately I think we’re gonna need […]

## Thanks, NVIDIA

Andrew and I both received a note like this from NVIDIA: We have reviewed your NVIDIA GPU Grant Request and are happy to support your work with the donation of (1) Titan Xp to support your research. Thanks! In case other people are interested, NVIDIA's GPU grant program provides ways for faculty or research scientists to […]

## Awesome MCMC animation site by Chi Feng! On Github!

Sean Talts and Bob Carpenter pointed us to this awesome MCMC animation site by Chi Feng. For instance, here’s NUTS on a banana-shaped density. This is indeed super-cool, and maybe there’s a way to connect these with Stan/ShinyStan/Bayesplot so as to automatically make movies of Stan model fits. This would be great, both to help […]

## Where do I learn about log_sum_exp, log1p, lccdf, and other numerical analysis tricks?

Richard McElreath inquires: I was helping a colleague recently fix his MATLAB code by using log_sum_exp and log1m tricks. The natural question he had was, “where do you learn this stuff?” I checked Numerical Recipes, but the statistical parts are actually pretty thin (at least in my 1994 edition). Do you know of any books/papers […]
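The common thread in these tricks is doing arithmetic on the log scale to avoid overflow and underflow. A minimal Python sketch of the two functions mentioned — an illustration of the idea, not McElreath's MATLAB code:

```python
import math

def log_sum_exp(xs):
    # Computes log(exp(x1) + ... + exp(xn)) stably. Factoring out
    # the max keeps every exponent <= 0, so exp() never overflows.
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def log1m(x):
    # log(1 - x), accurate for x near 0, where computing 1 - x
    # directly would lose precision; math.log1p(y) = log(1 + y).
    return math.log1p(-x)

# The naive log(sum(exp(...))) overflows for inputs like 1000.0;
# the stabilized version happily returns 1000.0 + log(2).
lse = log_sum_exp([1000.0, 1000.0])
```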

## Divisibility in statistics: Where is it needed?

The basics of Bayesian inference is p(parameters|data) proportional to p(parameters)*p(data|parameters). And, for predictions, p(predictions|data) = integral_parameters p(predictions|parameters,data)*p(parameters|data). In these expressions (and the corresponding simpler versions for maximum likelihood), “parameters” and “data” are unitary objects. Yes, it can be helpful to think of the parameter objects as being a list or vector of individual parameters; and […]
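Set in standard notation (with θ for the parameters, y for the data, and ỹ for the predictions), the two expressions in the excerpt read:

```latex
p(\theta \mid y) \propto p(\theta)\, p(y \mid \theta),
\qquad
p(\tilde{y} \mid y) = \int p(\tilde{y} \mid \theta, y)\, p(\theta \mid y)\, d\theta .
```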

## Anyone want to run this Bayesian computing conference in 2022?

OK, people think I’m obsessive with a blog with a 6-month lag, but that’s nothing compared to some statistics conferences. Mylène Bédard sends this along for anyone who might be interested: The Bayesian Computation Section of ISBA is soliciting proposals to host its flagship conference: Bayes Comp 2022 The expectation is that the meeting will […]

## In my role as professional singer and ham

Pryor unhooks the deer's skull from the wall above his still-curled-up companion. Examines it. Not a good specimen – the back half of the lower jaw's missing, a gap that, with the open cranial cavity, makes room enough for Pryor's head. He puts it on. – Will Eaves, Murmur So as we roll into the last […]

## Yes, but did it work? Evaluating variational inference

That’s the title of a recent article by Yuling Yao, Aki Vehtari, Daniel Simpson, and myself, which presents some diagnostics for variational approximations to posterior inference: We were motivated to write this paper by the success/failure of ADVI, the automatic variational inference algorithm devised by Alp Kucukelbir et al. The success was that ADVI solved […]

## Ways of knowing in computer science and statistics

Brad Groff writes: Thought you might find this post by Ferenc Huszar interesting. Commentary on how we create knowledge in machine learning research and how we resolve benchmark results with (belated) theory. Key passage: You can think of “making a deep learning method work on a dataset” as a statistical test. I would argue […]

## Answering the question, What predictors are more important?, going beyond p-value thresholding and ranking

Daniel Kapitan writes: We are in the process of writing a paper on the outcome of cataract surgery. A (very rough!) draft can be found here, to provide you with some context: https://www.overleaf.com/read/wvnwzjmrffmw. Using standard classification methods (Python sklearn, with synthetic oversampling to address the class imbalance), we are able to predict a poor outcome […]

## Wolfram Markdown, also called Computational Essay

I was reading Stephen Wolfram’s blog and came across this post: People are used to producing prose—and sometimes pictures—to express themselves. But in the modern age of computation, something new has become possible that I’d like to call the computational essay. I [Wolfram] have been working on building the technology to support computational essays for […]

## Comments on Limitations of Bayesian Leave-One-Out Cross-Validation for Model Selection

There is a recent pre-print Limitations of Bayesian Leave-One-Out Cross-Validation for Model Selection by Quentin Gronau and Eric-Jan Wagenmakers. Wagenmakers asked for comments and so here are my comments. Short version: They report a known limitation of LOO when it’s used in a non-recommended way for model selection. They report that their experiments show that […]