It’s easy to laugh at Stephen Wolfram, and I don’t like some of his business practices, but he’s an excellent writer and is full of interesting ideas. This long introduction to neural network prediction algorithms is an example. I have no idea if Wolfram wrote this book chapter himself or if he hired one of his paid theorem-provers to do it—I guess it’s probably some sort of collaboration—but it doesn’t really matter. It all looks really cool.

I did get lost in the links, though: when I tried to follow them and check the online version of the book, I found just a couple of pages of surface-level introduction.

(But the distraction proved useful in that I found something of interest for updating this post: http://statmodeling.stat.columbia.edu/2018/10/30/explainable-ml-versus-interpretable-ml/ )