we shouldn’t think of the model-building process as a blind data-fitting exercise. It’s more like we’re working toward building a larger model that makes sense, and each step in the process is a way of incorporating more information.

An example of how this could work in the case of sequential fitting: Let's say the residual plots suggest a quadratic function would be a good fit. To take into account the uncertainty in model space, we also need to consider other possible functions. We could integrate over a large set of parametric functions, or we could do more continuous model expansion by going non-parametric and using, for example, a Gaussian process prior in function space. Naturally this still leaves the question of which GP covariance function to use, considering, e.g., different smoothness and stationarity assumptions. But with finite data we know that eventually we reach a point where the data are not informative about higher-level hyperparameters (Goel and DeGroot, 1981) and adding more uncertainty layers doesn't change what we learn, so carefully made sequential fitting can produce a good M_*.

I tried to provide a helpful intro to Bayesian workflow here: https://drive.google.com/file/d/1L74bST2KaI5bwAZRA_PhchkSK_sXW2Nw/view

Unfortunately, the red in the plots does not always show up, but the plots are there along with the programs: https://github.com/KeithORourke/BayesinWorkflowLecture

Anticipating comments that what repeatedly happens is not Bayesianly relevant, I'll do a future post on that.
