Statistical learning theory vs Bayesian statistics

I have come across Vapnik vs Bayesian Machine Learning, a set of notes by the philosopher of science David Corfield. I agree with his notes and find them quite balanced, although they are not easy reading. My personal view is that statistical learning theory (SLT) derives from attempts to mathematically characterize the properties of a model, whereas the Bayesian approach works by molding and adapting within a malleable language of models. Bayesians have far more flexibility in the models they can build, relying on flexible general-purpose tools: having a vague posterior is often a modeling benefit, but also a computational burden. SLT users, on the other hand, focus on fitting the equivalent of a MAP estimate, being a bit haphazard about the regularization (the equivalent of a prior), but benefitting from modern optimization techniques.
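
To make the MAP/regularization correspondence concrete, here is a minimal sketch (my own example, not Corfield's) of the textbook case: for a linear model with Gaussian noise and a zero-mean Gaussian prior on the weights, the penalized least-squares fit an SLT practitioner would run coincides with the Bayesian posterior mode. The parameter names (sigma, tau, lam) are illustrative choices for this sketch.

```python
# Sketch: ridge regression (penalized loss) vs. MAP under a Gaussian prior.
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 3
X = rng.normal(size=(n, d))
w_true = np.array([1.0, -2.0, 0.5])
sigma = 0.3   # observation noise std (assumed known)
tau = 1.0     # prior std on each weight
y = X @ w_true + sigma * rng.normal(size=n)

# Penalized-loss view: argmin_w ||y - Xw||^2 + lam * ||w||^2
lam = sigma**2 / tau**2  # the regularizer is the prior in disguise
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Bayesian view: posterior mode (= mean, since everything is Gaussian)
# of p(w | y) with p(w) = N(0, tau^2 I) and p(y | w) = N(Xw, sigma^2 I)
posterior_cov = np.linalg.inv(X.T @ X / sigma**2 + np.eye(d) / tau**2)
w_map = posterior_cov @ (X.T @ y) / sigma**2

print(np.allclose(w_ridge, w_map))  # True: identical point estimates
```

Picking lam by cross-validation without asking what prior it encodes is exactly the "haphazard regularization" I mean: the two views agree on the point estimate, but only one of them tells you where lam came from.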

Over the past few years I have enjoyed communicating with several philosophers of science, including Malcolm Forster. The philosophers attempt to read and understand the work of several research streams working along the same lines, and to make sense of them together. The research streams themselves, on the other hand, spend less time trying to understand each other and more time conducting guerrilla warfare during anonymous paper reviews.