I missed last week and almost forgot to add this week’s.

**Jonah Gabry** returned from teaching a one-week course for a special EU research institute in Spain.

**Mitzi Morris** has been knocking out bug fixes for the parser and some pull requests to refactor the underlying type inference to clear the way for tuples, sparse matrices, and higher-order functions.

**Michael Betancourt**, with help from **Sean Talts**, spent last week teaching an intro course on Stan to physicists. **Charles Margossian** attended and said it went really well.

**Ben Goodrich**, in addition to handling a slew of RStan issues, has been diving into the math library to define derivatives for Bessel functions.

**Aki Vehtari** has put us in touch with the MxNet developers at Amazon UK and Berlin, and we had our first conference call with them to talk about adding sparse matrix functionality to Stan (Neil Lawrence is working there now). **Aki** is also working on revising the *EP as a way of life* paper and finalizing other Stan-related papers.

**Bob Carpenter** and **Andrew Gelman** have recruited **Advait Rajagopal** to help us with the Coursera specialization we’re going to offer (contingent on coming to an agreement with Columbia). The plan’s to have four courses: Intro to BDA (Andrew), Stan (Bob), MCMC (Bob), and Regression and other stories (Andrew).

**Ben Bales** finished the revised pull request for vectorized RNGs. Turns out these things are much easier to write than they are to test thoroughly: pesky problems with instantiations by integers and whatnot turn up.

**Daniel Lee** is getting ready for ACoP, which **Bill Gillespie** and **Charles Margossian** will also be presenting at.

**Steven Bronder** and **Rok Češnovar**, with some help from **Daniel Lee**, are going to merge GPU matrix operations into the math library, combining the ViennaCL library with their own specializations for derivatives in Stan. This is getting close to being real for users.

**Sean Talts**, when he wasn’t teaching or learning physics, has been refactoring the Jenkins test facilities. As our tests get bigger and we get more developers, it’s getting harder and harder to maintain stable continuous integration testing.

**Breck Baldwin** is taking over dealing with StanCon. Our goal is to get up to 150 registrations.

**Breck Baldwin** has also been working with **Andrew Gelman** and **Jonathan Auerbach** on non-conventional statistics training (like at Maker Faires)—they have the beginnings of a paper. Breck highly recommends the math museum in NY to see how this kind of thing’s done.

**Bob Carpenter** published a Wiki page on a Stan 3 model concept, which is probably the design we’ll adopt going forward. It’s pretty much like what we have now, with better const correctness and better organized utility functions.

**Imad Ali** went to the New England Sports Stats conference. Expect to see more models of basketball using Stan soon.

**Ben Goodrich** fixed the problem with exception handling in RStan on some platforms (always a pain because it happened on Macs and he’s not a Mac user).

**Advait Rajagopal** has been working with **Imad Ali** on adding ARMA and ARIMA time-series functions to rstanarm.

**Aki Vehtari** is working to enhance the loo package with automated code for K-fold cross validation for (g)lmer models.

**Lizzie Wolkovich** visited us for a meeting (she’s on our NumFOCUS leadership body), where she reported that she and a postdoc have been working on calibrating Stan models for phenology (look it up).

**Krzysztof Sakrejda** has been working on proper standalone function generation for Rcpp. Turns out to be tricky with their namespace requirements, but I think we have it sorted out as of today.

**Michael Andreae** has kicked off his meta-analysis and graphics project at Penn State, with **Jonah Gabry** and **Ben Goodrich** chipping in.

**Ben Goodrich** also fixed the infrastructure for RStan so that multiple models may be supported more easily, which should make it much easier for R package writers to incorporate Stan models.

**Yuling Yao** gave us the rundown on where ADVI testing stands. It may falsely report convergence when it’s not at an optimum, it may converge to a local optimum, or it may converge but the Gaussian approximation may be terrible, either in terms of the posterior means or the variances. He and **Andrew Gelman** are looking at using Pareto smoothed importance sampling (a la the loo package) to try to sort out the quality of the approximation. Yuling thinks the convergence problems are mostly scaling issues, and that preconditioning along with natural gradients may solve them. It’s nice to see grad students sink their teeth into a problem! It’d be great if we could come up with a more robust ADVI implementation that had diagnostic warnings if the approximation wasn’t reliable.
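To make the importance-sampling diagnostic idea concrete, here’s a minimal sketch in Python. This is not the loo package’s implementation (loo fits a generalized Pareto distribution to the tail of the importance ratios and reports its shape k-hat); the `tail_shape` helper below is a cruder Hill-style stand-in, and the normal target/approximation pair is a made-up toy example. The point it illustrates: when the approximation q is too narrow relative to the posterior p, the importance ratios p/q become heavy-tailed, and a tail-shape estimate flags the approximation as unreliable.

```python
import numpy as np

def tail_shape(log_ratios, tail_frac=0.2):
    """Crude Hill-style estimate of the tail shape of the importance
    ratios p(z)/q(z), computed in log space. A stand-in for the
    generalized Pareto fit used by Pareto smoothed importance sampling;
    larger values mean heavier tails, i.e. a less trustworthy q."""
    lr = np.sort(np.asarray(log_ratios))
    m = max(int(tail_frac * lr.size), 5)   # number of tail draws to use
    tail = lr[-m:]
    return float(np.mean(tail - tail[0]))  # mean log-excess over the threshold

def log_normal_pdf(z, mu, sigma):
    return -0.5 * np.log(2 * np.pi) - np.log(sigma) - 0.5 * ((z - mu) / sigma) ** 2

rng = np.random.default_rng(0)

# Toy target posterior p = N(0, 1). A too-narrow approximation
# q = N(0, 0.5) mimics the bad-variance failure mode described above.
z_bad = rng.normal(0.0, 0.5, size=4000)
log_r_bad = log_normal_pdf(z_bad, 0, 1) - log_normal_pdf(z_bad, 0, 0.5)

# A perfect approximation q = p gives constant ratios (shape 0).
z_ok = rng.normal(0.0, 1.0, size=4000)
log_r_ok = log_normal_pdf(z_ok, 0, 1) - log_normal_pdf(z_ok, 0, 1)

print(tail_shape(log_r_bad) > tail_shape(log_r_ok))  # narrow q has heavier tails
```

A diagnostic along these lines is what would back the warning messages mentioned above: compute the ratios between the model’s log density and the Gaussian approximation’s log density at the approximation’s own draws, estimate the tail shape, and warn when it exceeds a threshold.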

> Aki Vehtari enhanced the loo package with automated code for K-fold cross validation.

Awesome! That’s really exciting. Also really excited about potential ADVI improvements.

K-fold-CV code is a work in progress, but we should have something useful available soon, and a case study to follow (I just edited the post to be more accurate).

Link to Stan 3 model concept wiki page: https://github.com/stan-dev/stan/wiki/Stan-3-Model-Concept-Proposal