As usual, you can find everything on the Stan home page.

Drop us a line on the stan-users group if you have problems with installs or questions about Stan or coding particular models.

**New Interfaces**

We’d like to welcome two new interfaces:

- MatlabStan by Brian Lau, and

- Stan.jl (for Julia) by Rob Goedman.

The new interface home pages are linked from the Stan home page.

**New Features**

The biggest new feature is a differential equation solver (Runge-Kutta from Boost’s odeint with coupled sensitivities). We also added new cbind and rbind functions, is_nan and is_inf functions, a num_elements function, a mechanism to throw exceptions to reject samples with printed messages, and two new distributions, the Frechet and 2-parameter Pareto (both contributed by Alexey Stukalov).
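A hypothetical sketch of how a few of the new functions might fit together in a model block (all variable names are illustrative; uses the 2.x `<-` assignment syntax):

```stan
model {
  // num_elements() now works on any container type
  int N;
  N <- num_elements(y);

  // is_nan() / is_inf() let you guard against bad intermediate values,
  // and reject() throws an exception that rejects the current draw
  // with a printed message
  if (is_nan(sigma) || is_inf(sigma))
    reject("sigma is not finite");

  y ~ normal(mu, sigma);
}
```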

**Backward Compatibility**

Stan 2.5 is fully backward compatible with earlier 2.x releases and will remain so until Stan 3 (which is not yet designed, much less scheduled).

**Revised Manual**

In addition to the ODE documentation, there is a new chapter on marginalizing discrete latent parameters with several example models; new sections on regression priors for coefficients and noise scale in ordinary, hierarchical, and multivariate settings; and new chapters on all the algorithms used by Stan for MCMC sampling, optimization, and diagnosis, with configuration information and advice.

**Preview of 2.6 and Beyond**

Our plans for major features in the near future include stiff ODE solvers, a general MATLAB/R-style array/matrix/vector indexing and assignment syntax, and uncertainty estimates for penalized maximum likelihood estimates via Laplace approximations with second-order autodiff.
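For the indexing feature, a hypothetical sketch of what R/MATLAB-style indexing might look like in a Stan program (the syntax is not final; `idx`, `mu`, and `beta` are illustrative names):

```stan
// hypothetical sketch of planned multiple indexing (syntax not final)
int idx[M];          // an array of indexes into a larger container
vector[N] mu;

mu[idx] <- x[idx] * beta;          // assign to the slice selected by idx
y[idx] ~ normal(mu[idx], sigma);   // vectorized statement over a slice
```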

**Release Notes**

Here are the release notes.

v2.5.0 (20 October 2014)
======================================================================

New Features
----------------------------------------
* ordinary differential equation solver, implemented by coupling the user-specified system with its sensitivities (#771)
* add reject() statement for user-defined rejections/exceptions (#458)
* new num_elements() function that applies to all containers (#1026)
* added is_nan() and is_inf() functions (#592)
* nested reverse-mode autodiff, primarily for ODE solver (#1031)
* added get_lp() function to remove any need for bare lp__ (#470)
* new functions cbind() and rbind() like those in R (#787)
* added modulus function in a way that is consistent with integer division across platforms (#577)
* exposed pareto_type_2_rng (#580)
* added Frechet distribution and multi_gp_cholesky distribution (thanks to Alexey Stukalov for both)

Enhancements
----------------------------------------
* removed Eigen code insertion for numeric traits and replaced with order-independent metaprogram (#1065)
* cleaned up error messages to provide clearer error context and more informative messages (#640)
* extensive tests for higher-order autodiff in densities (#823)
* added context factory
* deprecated lkj_cov density (#865)
* trying again with informational/rejection message (#223)
* more code moved from interfaces into Stan common libraries, including a var_context factory for configuration
* moved example models to own repo (stan-dev/example-models) and included as submodule for stan-dev/stan (#314)
* added per-iteration interrupt handler to BFGS optimizer (#768)
* worked around unused function warnings from gcc (#796)
* fixed error messages in vector to array conversion (#579, thanks Kevin S. Van Horn)
* fixed gp-fit.stan example to be as efficient as manual version (#782)
* update to Eigen version 3.2.2 (#1087)

Builds
----------------------------------------
* pull out testing into Python script for developers to simplify makes
* libstan dependencies handled properly and regenerate dependencies, including working around bug in GNU make 3.8.1 (#1058, #1061, #1062)

Bug Fixes
----------------------------------------
* deal with covariant return structure in functions (allows data-only variables to alternate with parameter version); involved adding new traits metaprograms promote_scalar and promote_scalar_type (#849)
* fixed error message on check_nonzero_size (#1066)
* fix arg config printing after random seed generation (#1049)
* logical conjunction and disjunction operators short circuit (#593)
* clean up parser bug preventing variables starting with reserved names (#866)
* fma() function calls underlying platform fma (#667)
* remove upper bound on number of function arguments (#867)
* cleaned up code to remove compiler warnings (#1034)
* share likely() and unlikely() macros to avoid redundancy warnings (#1002)
* complete review of function library for NaN behavior and consistency of calls for double and autodiff values, with extensive documentation and extensive new unit tests; enhances NaN testing in built-in test functions (several dozen issues in the #800 to #902 range)
* fixed Eigen assert bugs with NO_DEBUG in tests (#904)
* fix to makefile to allow builds in g++ 4.4 (thanks to Ewan Dunbar)
* fix precedence of exponentiation in language (#835)
* allow size zero inputs in data and initialization (#683)

Documentation
----------------------------------------
* new chapter on differential equation solver
* new sections on default priors for regression coefficients and scales, including hierarchical and multivariate based on full Cholesky parameterization
* new part on algorithms, with chapters on HMC/NUTS, optimization, and diagnostics
* new chapter on models with latent discrete parameters
* using latexmk through make for LaTeX compilation
* changed page numbers to be contiguous throughout so page numbers match PDF viewer page number
* all user-supplied corrections applied from next-manual issue
* section on identifiability with priors, including discussion of K-1 parameterization of softmax and IRT
* new section on convergence monitoring
* extensive corrections from Andrew Gelman on regression models and notation
* added discussion of hurdle model in zero-inflation section
* update built-in function doc to clarify several behaviors (#1025)

Very nice!

Congrats to the Stan team.

Yay! I get to play with ODEs! (also, congratulations!)

You have no idea how excited this makes me. I have potentially lots of ODE-based problems that Stan would be great for.

We’re still hard at work on ODEs. We will be adding stiff solvers in the near future (for systems with varying time scales) as well as wrappers to make specifying the systems easier.
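For the curious, here is a sketch of the style of system specification the new solver expects, using the simple harmonic oscillator as an example (following the function signature in the manual; `sho` and the variable names are illustrative, and the 2.x `<-` assignment syntax is used):

```stan
functions {
  real[] sho(real t,          // time
             real[] y,        // system state
             real[] theta,    // parameters
             real[] x_r,      // real-valued data (unused here)
             int[] x_i) {     // integer data (unused here)
    real dydt[2];
    dydt[1] <- y[2];
    dydt[2] <- -y[1] - theta[1] * y[2];
    return dydt;
  }
}
// ... later, solve for the states at the requested times ts:
//   y_hat <- integrate_ode(sho, y0, t0, ts, theta, x_r, x_i);
```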

We’ve also had requests for implicit function definitions and differential algebraic equations, which will probably get into Stan eventually (no designs yet).

A solver with a variable step size like Runge-Kutta-Fehlberg would be useful too. This is slightly different from a stiff solver, but related, as often you only need small time steps in small regions of time (or whatever the independent variable is). Perhaps Boost already does that?

Great news indeed! Can’t wait to try it out. I generally use stiff solvers, so good to hear they are in the pipeline. Keep up the good work guys!

Are there plans to implement variational inference? I thought I remembered that being in the plans for coming releases (maybe even this year?).

+1 variational inference!

That could well be the tipping point for me as a machine learner to start doing more of my work inside Stan. Knowing that I can choose from a range of approximate inference options is very nice. Having penalised maximum likelihood plus now Laplace approximations available as well as state-of-the-art MCMC is very nice. Variational methods would be the icing on the cake. (I imagine you’d need some extra specification for the form of the variational approximation, but could then treat point estimation as a special case of it where the variational distribution is just a delta function, and potentially could re-use existing optimisation code to optimise a variational free energy once derived).

Another feature I’d love to see at some point would be stochastic gradient descent optimisation options for point estimation when you have lots of (conditionally) independent data. With this, Stan could even be used to fit neural models and jump on the “deep learning” bandwagon ;) But it is potentially useful in other settings where you have lots of data and want fast convergence to a posterior mode. This could combine with variational inference options too.

I can answer both questions at once.

Yes, variational inference is in the works. Alp Kucukelbir has a prototype integrated into Stan, so it may not even be that far away. There’s still a lot of evaluation to do to see what kind of models it works for.

As to specifying variational approximations, we’re going with black-box VB with relatively simple mean-field approximations so the user won’t have to specify anything.
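As a sketch of what “mean-field” means here (the standard textbook formulation, not necessarily the exact objective Stan will ship): the posterior is approximated by a fully factorized family, and the parameters $\phi$ are chosen to maximize the evidence lower bound (ELBO):

$$
q(\theta; \phi) = \prod_k q_k(\theta_k; \phi_k),
\qquad
\mathcal{L}(\phi)
= \mathbb{E}_{q}\!\left[\log p(y, \theta)\right]
- \mathbb{E}_{q}\!\left[\log q(\theta; \phi)\right]
\le \log p(y).
$$

The “black-box” part is that the expectations are estimated by Monte Carlo with autodiff gradients, so no model-specific derivations are required from the user.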

We want to do stochastic variational inference, as well. No plans for stochastic gradient yet, but it’d be relatively easy once we have some kind of data splitting, which we’re looking at for data-parallel expectation propagation and also for variational inference with latent parameters tied to data in a stochastic setting.

There’s an optimization technique called “simultaneous perturbation stochastic approximation” (SPSA). It works by calculating an approximate gradient from function evaluations at randomly perturbed points. I have this intuition that some modification of it could work well as a way of generating MCMC proposals without all the auto-diff stuff. Also, of course, it would be one kind of stochastic optimization method as well. Have you guys looked into it, and do you know whether anyone has tried adapting it to MCMC proposals?
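For reference, the standard (two-measurement) form of Spall's SPSA estimate perturbs all coordinates at once with a single random direction, so the whole gradient estimate costs only two function evaluations regardless of dimension:

$$
\hat{g}_i(\theta)
= \frac{f(\theta + c\,\Delta) - f(\theta - c\,\Delta)}{2\,c\,\Delta_i},
\qquad
\Delta_i \in \{-1, +1\} \text{ i.i.d. (Rademacher)},
$$

where $c$ is a small perturbation size. There is also a one-measurement variant that drops the second evaluation at the cost of higher variance.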

Gah. I’d rather have this as a sensible default that power users can change.