Joline, just to make sure: do you mean random effects specifically for the spline coefficients? If so, what about using a multivariate normal prior for the spline coefficients with an unstructured variance-covariance matrix (potentially working with LKJ priors over correlation matrices)? Or what about using a hierarchical Gaussian process prior for the spline coefficients, where the first level would capture the fixed-effect spline coefficients and the second level the group-specific deviations? I haven't seen this anywhere, though…
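In symbols (my notation, not from the thread), the first suggestion would put a multivariate normal prior on each group's spline coefficients, with the covariance decomposed into scale parameters and an LKJ-distributed correlation matrix:

$$
\mathbf{b}_g \sim \mathcal{N}(\boldsymbol{\beta}, \Sigma), \qquad
\Sigma = \operatorname{diag}(\boldsymbol{\tau})\,\Omega\,\operatorname{diag}(\boldsymbol{\tau}), \qquad
\Omega \sim \mathrm{LKJ}(\eta),
$$

where $\boldsymbol{\beta}$ holds the population-level ("fixed") spline coefficients, $\mathbf{b}_g$ the coefficients for group $g$, $\boldsymbol{\tau}$ the per-coefficient scales, and $\Omega$ the correlation matrix.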

Hey Corson,

Thanks very much for your model with the fixed effects! Do you have any idea how to combine the splines with random effects instead of the fixed ones? That would be a great help.

Thanks in advance,

Joline

I _think_ there is a subtle bug in adding the intercept twice, namely here:

Y_hat = a0*to_vector(X) + to_vector(a*B);

and in bs( … , intercept = TRUE)

The brms package is PHENOMENAL.

For GPs, if the written material that comes up when Googling “Gaussian process tutorial” isn’t approachable, maybe try some YouTube videos…?

Besides (potentially) having far fewer parameters, using a spline basis allows you to easily enforce all kinds of useful properties. Want your estimate to be periodic? Use a Fourier basis. Want it to be monotonic? Use a monotonic basis. Positive? Constrain the basis coefficients to be positive. Want to guarantee that your estimate has a continuous derivative up to n-th order? No problem, B-splines can do that easily.
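As a concrete illustration of such a basis, here is a minimal pure-Python sketch of B-spline basis evaluation via the Cox-de Boor recursion (function and variable names are my own; in practice you would use a library such as `splines::bs` in R):

```python
def bspline_basis(i, k, t, x):
    """Evaluate the i-th B-spline basis function of order k (degree k - 1)
    at x, for knot vector t, via the Cox-de Boor recursion."""
    if k == 1:
        # Order-1 basis: indicator of the half-open knot interval.
        return 1.0 if t[i] <= x < t[i + 1] else 0.0
    left = right = 0.0
    if t[i + k - 1] != t[i]:
        left = (x - t[i]) / (t[i + k - 1] - t[i]) * bspline_basis(i, k - 1, t, x)
    if t[i + k] != t[i + 1]:
        right = (t[i + k] - x) / (t[i + k] - t[i + 1]) * bspline_basis(i + 1, k - 1, t, x)
    return left + right

# Clamped knot vector for a cubic (order-4) basis on [0, 4].
knots = [0, 0, 0, 0, 1, 2, 3, 4, 4, 4, 4]
order = 4
n_basis = len(knots) - order  # 7 basis functions

vals = [bspline_basis(i, order, knots, 1.5) for i in range(n_basis)]
print(all(v >= 0 for v in vals))     # basis functions are non-negative
print(abs(sum(vals) - 1.0) < 1e-12)  # and sum to 1 (partition of unity)
```

The non-negativity is what makes the positivity trick work: non-negative coefficients on non-negative basis functions give a non-negative estimate; periodicity, monotonicity, and smoothness constraints come from analogous properties of the chosen basis.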

http://jofrhwld.github.io/blog/2014/07/29/nonlinear_stan.html?

For splines in the broader context of functional data analysis, I can’t recommend Ramsay and Silverman’s “Functional Data Analysis” enough.

Cubic splines have a nice interpretation that makes them popular in functional data analysis: they can be viewed as the solution to a least-squares problem (with smoothness constraints) for data lying in a Hilbert space. They’re also generally very fast — I routinely deal with datasets containing tens of thousands of curves, all of which need to be smoothed and interpolated, and using Gaussian processes just isn’t feasible, even though they tend to be more “expressive”. It’s also easy to take derivatives of splines, which I need to do often.

There’s also a huge amount of work on spline-based linear and regression models, which helps. Fitting similar models using GPs would probably need A) custom programs, and B) lots of computer time.
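The least-squares interpretation mentioned above is the classical smoothing-spline objective (standard textbook formulation, written here in my own notation):

$$
\hat f \;=\; \operatorname*{arg\,min}_{f} \;\sum_{i=1}^{n} \bigl(y_i - f(x_i)\bigr)^2 \;+\; \lambda \int \bigl(f''(t)\bigr)^2 \, dt,
$$

whose minimizer over a suitable function space is a natural cubic spline with knots at the observed $x_i$; the penalty weight $\lambda$ controls the trade-off between fit and smoothness.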

Stan Totally not ANacronym

I just started a fork (https://github.com/mike-lawrence/Splines_in_Stan) that includes a comparison against a GP. As Paul says, GPs are much more computationally costly, so the GP takes longer, but it does better at recovering the true function than even the penalized splines, which are still too wiggly.

I have worked with both in brms, and it seems as if splines can be used with far larger data sets, since computing the covariance matrix for Gaussian processes becomes increasingly expensive as the number of data points grows.
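To make that scaling concrete (the sizes below are illustrative assumptions, not measurements from brms): exact GP inference builds and factorizes an n × n covariance matrix, roughly cubic work in n, while spline regression works with an n × k design matrix for a fixed number of basis functions k:

```python
# Illustrative operation counts (assumed sizes, not benchmarks).
n = 10_000  # number of data points
k = 20      # number of spline basis functions

gp_cov_entries = n * n     # GP covariance matrix: n^2 entries to build
gp_flops = n ** 3          # Cholesky factorization: ~O(n^3) work
spline_flops = n * k ** 2  # penalized least squares: ~O(n k^2) work

print(gp_flops // spline_flops)  # the GP does ~250,000x more work here
```

Doubling n quadruples the spline work but multiplies the GP work by eight, which matches the experience that splines stay practical on far larger data sets.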
