Here are some things that have been going on with Stan since last week’s roundup.

**Stan**^{®} and the logo were granted U.S. Trademark Registration No. 5,222,891 and U.S. Serial Number 87,237,369, respectively. Hard to feel special when there were millions of products ahead of you. Trademarked names are case insensitive, and registration required a black-and-white image, shown here.

**Peter Ellis**, a data analyst working for the New Zealand government, posted a nice case study, State-space modelling of the Australian 2007 federal election. His post is intended to “replicate Simon Jackman’s state space modelling [from his book and pscl package in R] with house effects of the 2007 Australian federal election.”

**Masaaki Horikoshi** provides Stan programs on GitHub for the models in Jacques J.F. Commandeur and Siem Jan Koopman’s book *Introduction to State Space Time Series Analysis*.

**Sebastian Weber** put out a first draft of the MPI specification for a map function for Stan. Mapping was introduced in Lisp with maplist(); Python uses map() and R uses sapply(). The map operation is also the first half of the parallel map-reduce pattern, which is how we’re implementing it. The reduction involves fiddling the operands, result, and gradients into the shared autodiff graph.
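To illustrate the map-reduce pattern in question (this is a generic Python sketch, not Stan’s actual API; the shard function here is hypothetical), the map step evaluates each shard of data independently, which is the part that can be farmed out over MPI, and the reduce step combines the per-shard results:

```python
from functools import reduce

# Hypothetical stand-in for an expensive per-shard computation
# (e.g., the log likelihood contribution of one block of data).
def shard_value(data_shard):
    return sum(x * x for x in data_shard)

shards = [[1.0, 2.0], [3.0], [4.0, 5.0]]

# map: evaluate each shard independently (parallelizable step)
partial_results = list(map(shard_value, shards))

# reduce: combine the shard results into a single value
total = reduce(lambda a, b: a + b, partial_results, 0.0)
# total == 55.0
```

In Stan’s case the reduce step is the harder half, since the per-shard results and their gradients have to be stitched back into the shared autodiff graph.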

**Sophia Rabe-Hesketh, Daniel Furr, and Seung Yeon Lee**, of UC Berkeley, put together a page of Resources for Stan in educational modeling; we only have another partial year left on our IES grant with Sophia.

**Bill Gillespie** put together some introductory Stan lectures. Bill’s recently back from teaching Stan at the PAGE conference in Budapest.

**Mitzi Morris** got her pull request merged to add compound arithmetic and assignment to the language (she did the compound declare/define before that). That means we’ll be able to write `foo[i, j] += 1` instead of `foo[i, j] = foo[i, j] + 1` going forward. It works for all types where the binary operation and assignment are well typed.

**Sean Talts** has the first prototype of Andrew Gelman’s algorithm for max marginal modes, either posterior or likelihood. This’ll give us the same kind of maximum likelihood estimates as Doug Bates’s packages for generalized linear mixed-effects models, lme4 in R and MixedModels.jl in Julia. It not only allows penalties or priors like Vince Dorie’s and Andrew’s R package blme, but it can be used for arbitrary parameter subsets in arbitrary Stan models. It shares some computational tricks for stochastic derivatives with Alp Kucukelbir’s autodiff variational inference (ADVI) algorithm.

**I** got the pull request merged for the forward-mode test framework. It’s cutting down drastically on code size and improving test coverage. Thanks to Rob Trangucci for writing the finite diff functionals and to Sean Talts and Daniel Lee for feedback on the first round of testing. This should mean that we’ll have higher-order autodiff exposed soon, which means RHMC and faster autodiffed Hessians.
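For readers curious how finite-difference functionals get used to test autodiff, the basic idea is to compare an analytic (or autodiffed) gradient against central finite differences. Here is a rough Python sketch of the technique (the actual test framework is C++ and the function here is made up for illustration):

```python
# Sketch of gradient testing via central finite differences.
# f is a toy function; analytic_grad holds its hand-derived gradient.

def f(x, y):
    return x * x * y + y

def analytic_grad(x, y):
    # partial derivatives of f: d/dx = 2xy, d/dy = x^2 + 1
    return (2.0 * x * y, x * x + 1.0)

def finite_diff_grad(f, x, y, h=1e-6):
    # central difference (f(v + h) - f(v - h)) / (2h) in each argument
    dx = (f(x + h, y) - f(x - h, y)) / (2.0 * h)
    dy = (f(x, y + h) - f(x, y - h)) / (2.0 * h)
    return (dx, dy)

fd = finite_diff_grad(f, 1.5, -2.0)
an = analytic_grad(1.5, -2.0)
# the two gradients should agree to within finite-difference error
assert all(abs(a - b) < 1e-5 for a, b in zip(fd, an))
```

A test framework runs this kind of comparison over a battery of functions and inputs, which is how the finite diff functionals back up the autodiff results.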

I just updated the post. I started it mid-week, then forgot to finish it after the meeting yesterday. I’m sure there’s other stuff I forgot, too. Let me know at carp@alias-i.com or comment here if you have Stan-related news to report.

Will MML be automated, so one can get marginal maximums for any Stan model?

I’ve really enjoyed these Weekly Roundups so far. It’s amazing to see how fast things are moving. Keep up the great work.