Great scientists come in two varieties, which Isaiah Berlin, quoting the seventh-century-BC poet Archilochus, called foxes and hedgehogs. Foxes know many tricks, hedgehogs only one. Foxes are interested in everything, and move easily from one problem to another. Hedgehogs are interested only in a few problems which they consider fundamental, and stick with the same problems for years or decades. Most of the great discoveries are made by hedgehogs, most of the little discoveries by foxes. Science needs both hedgehogs and foxes for its healthy growth, hedgehogs to dig deep into the nature of things, foxes to explore the complicated details of our marvelous universe. Albert Einstein was a hedgehog; Richard Feynman was a fox.
This got me thinking about statisticians. I think we’re almost all foxes! The leading statisticians over the years all seem to have worked on lots of problems. Even when they have, hedgehog-like, developed systematic ideas over the years, these have been developed in a series of applications. It seems to be part of the modern ethos of statistics that the expected path to discovery is through the dirt of applications.
I wonder if the profusion of foxes is related to statistics’s position, compared to, say, physics, as a less “mature” science. In physics and mathematics, important problems can be easy to formulate but (a) extremely difficult to solve and (b) hard even to follow the current research on. It takes a hedgehog-like focus just to get close enough to the research frontier that you can consider trying to solve open problems. In contrast, in statistics, very little background is needed, not just to formulate open problems but also to acquire many of the tools needed to study them. I’m thinking here of problems such as how to include large numbers of interactions in a model. Much of the progress made by statisticians and computer scientists on this problem has been made in the context of particular applications.
Going through some great names of the past:
Laplace: possibly hedgehog-like in developing probability theory, but I think of him as foxlike in working on various social-statistics applications such as surveys, which gave him the motivation needed to develop practical Bayesian methods.
Gauss: least-squares is a great achievement, but developed as a particular mathematical tool to solve some measurement error problems. In the context of his career, his statistical work is foxlike.
Galton: could be called a “hedgehog” for his obsession with regression, but I think of him as a fox with all his little examples.
Fisher: fox. Developed methods as needed. Developed theory as appropriate (or often inappropriate).
Pearson: the family of distributions smells like a hedgehog, but what’s left of it, including chi-squared tests, looks like fox tracks.
Neyman: perhaps wanted to be a hedgehog but ultimately a fox, in that he made contributions to different problems of estimation and testing. I’d say the same of Wald and the other mid-century theorists: they might have wanted to be hedgehogs but there was no “theory of relativity” out there for them to discover, so they seem like foxes to me.
What about the leading statisticians of the twentieth century?
Rubin: fox. (You could call him a hedgehog for his idea that all statistics is missing data, but this is developed in a foxlike proliferation of examples.)
Chernoff: fox. Various big ideas but no single quest.
Some hedgehogs: Lindley. Donoho/Johnstone. Berger. Nelder.
OK, I’m sure I’ve missed a lot of important names here in this game of foxdar. But you get the point.
P.S. Tian noticed this article also.