Peter Ellis writes:

As part of familiarising myself with the Stan probabilistic programming language, I replicate Simon Jackman’s state space modelling with house effects of the 2007 Australian federal election. . . .

It’s not quite the model that I’d use—indeed, Ellis writes,

“I’m fairly new to Stan and I’m pretty sure my Stan programs that follow aren’t best practice, even though I am confident they work”—but it’s not so bad to see a newcomer work through the steps on a real-data example. This might inspire some of you to use Stan to fit Bayesian models on your own problems.

Again, the key advantage of Stan is flexibility in modeling: you can (and should) start with something simple, and then you can add refinements to make the model more realistic, at each step thinking about the meanings of the new parameters and about what prior information you have. It’s great if you have strong prior information to help fit the model, but it can also help to have weak prior information that regularizes—gives more stable estimates—by softly constraining the zone of reasonable values of the parameters. Go step by step, and before you know it, you’re fitting models that make much more sense than all the crude approximations you were using before.
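To make that workflow concrete, here is a minimal sketch of the kind of model under discussion: a random-walk state-space model for polling with house effects. This is illustrative code written for this post, not Ellis's or Jackman's actual program, and the variable names, prior scales, and data structure are all assumptions:

```stan
// Illustrative sketch only -- not Ellis's program. Latent voting
// intention mu[t] follows a random walk; each poll y[i] from house
// h[i] measures it with a house-specific bias delta[h].
data {
  int<lower=1> T;                        // number of days
  int<lower=1> N;                        // number of polls
  int<lower=1> H;                        // number of polling houses
  array[N] int<lower=1, upper=T> day;    // day each poll was taken
  array[N] int<lower=1, upper=H> house;  // house that ran each poll
  vector[N] y;                           // poll results (proportions)
  vector<lower=0>[N] se;                 // poll sampling standard errors
}
parameters {
  vector[T] mu;                          // latent voting intention
  vector[H] delta;                       // house effects
  real<lower=0> sigma;                   // day-to-day innovation scale
}
model {
  // weakly informative priors: they regularize -- softly constraining
  // the parameters to reasonable values -- without dominating the data
  sigma ~ normal(0, 0.01);
  delta ~ normal(0, 0.05);
  mu[1] ~ normal(0.5, 0.1);
  mu[2:T] ~ normal(mu[1:(T - 1)], sigma);  // random-walk state equation
  y ~ normal(mu[day] + delta[house], se);  // measurement equation
}
```

The step-by-step refinement described above maps directly onto this file: one could start with the measurement equation alone, then add the random walk, then the house effects, checking the fit and thinking about the priors at each stage.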

“the key advantage of Stan is flexibility in modeling” — That’s what I emphasize whenever somebody asks me about Bayesian methods. I don’t care that much about the other differences between Bayesian and frequentist inference. If I can easily build and fit a custom model tailored to the data-generating process, that’s amazing.