As a statistician I am particularly worried about the rhetorical power of anecdotes (even though I use them in my own reasoning; see discussion below). But much can be learned from a true anecdote. The rough edges—the places where the anecdote doesn’t fit your thesis—these are where you learn.
We have recently had a discussion (here and here) of Karl Weick, a prominent scholar of business management who plagiarized a story and then went on to draw different lessons from the pilfered anecdote in several publications over many years.
Setting aside the issues of plagiarism and rule-breaking, I argue that, by hiding the source of the story and changing its form, Weick and his management-science audience lose their ability to get anything out of it beyond empty confirmation.
A full discussion follows.
1. The lost Hungarian soldiers
Thomas Basbøll (who has the unusual (to me) job of “writing consultant” at the Copenhagen Business School) has been writing in different places about a story that has been making the rounds over the past few decades among organizational sociologists and management consultants. The story started with a discovery of plagiarism by the eminent scholar Karl Weick but then moved toward a more general exploration of storytelling and belief. (I learned about this example via an email from Basbøll (who had become aware of my interest in plagiarism); it turns out we also have a common interest in the bases of scientific and scholarly ideas.)
From Basbøll’s latest and most historical telling (linked from here, via Basbøll’s blog), supplemented by Wikipedia, I summarize what happened in time order (which is somewhat ahistorical in that it does not represent the order in which Basbøll, and perhaps Weick, learned about these events):
1916: Albert Szent-Györgyi, a medical student in Budapest, serves in World War 1.
1930: Working in Szeged, Hungary, Szent-Györgyi and his colleagues discover vitamin C. In the next several decades, he continues to make research contributions and becomes a prominent scientist, eventually moving to the U.S. after World War 2. He dies in 1986.
1972: Medical researcher Oscar Hechter reports the following in the proceedings of “an international conference on cell membrane structure,” published in 1972:
Let me close by sharing with you a story told me by Albert Szent-Györgyi. A small group of Hungarian troops were camped in the Alps during the First World War. Their commander, a young lieutenant, decided to send out a small group of men on a scouting mission. Shortly after the scouting group left it began to snow, and it snowed steadily for two days. The scouting squad did not return, and the young officer, something of an intellectual and an idealist, suffered a paroxysm of guilt over having sent his men to their death. In his torment he questioned not only his decision to send out the scouting mission, but also the war itself and his own role in it. He was a man tormented.
Suddenly, unexpectedly, on the third day the long-overdue scouting squad returned. There was great joy, great relief in the camp, and the young commander questioned his men eagerly. “Where were you?” he asked. “How did you survive, how did you find your way back?” The sergeant who had led the scouts replied, “We were lost in the snow and we had given up hope, had resigned ourselves to die. Then one of the men found a map in his pocket. With its help we knew we could find our way back. We made camp, waited for the snow to stop, and then as soon as we could travel we returned here.” The young commander asked to see this wonderful map. It was a map not of the Alps but of the Pyrenees!
The moral of the story, as given by Hechter and by Bernard Pullman at another symposium a year later, is that the map gave the soldiers the confidence to make good decisions. Basbøll writes:
The map of the Pyrenees is like a controversial paper or a tentative model study in science. Whether or not it turns out to be an accurate representation of the “territory” is less important than the stimulus it may provide to further research, argue both Hechter and Pullman.
1977: Immunologist Miroslav Holub publishes a poem (of the prosy, non-rhyming sort) telling the lost-soldiers story (again, crediting Szent-Györgyi) in the Times Literary Supplement, translated from the Czech. Holub may have actually attended the meeting reported on by Hechter.
1982: Robert Swieringa and Karl Weick publish an article including a nearly word-for-word transcription of Holub’s poem, but not using quotation marks or acknowledging Holub at all, presenting the story as “an incident that happened,” and placing the event (implausibly) in Switzerland.
“Sometime in the mid-1980s”: Weick tells the story to Bob Engel, a “top Wall Street executive” who “was about to take leadership of a new strategic planning group at Morgan Guaranty.” The moral of the story: “When you are lost, any old map will do.”
1995: Weick writes:
What is interesting about Engel’s twist to the story is that he has described a situation that most leaders face. Followers are often lost and even the leader is not sure where to go. All the leaders know is that the plan or the map they have in front of them are not sufficient to get them out. What the leader has to do, when faced with this situation, is instill some confidence in people, get them moving some general direction, and be sure they look closely at cues created by their actions so that they learn where they were and get some better idea of where they are and where they want to be.
2005: Barbara Czarniawska reports on a 1998 talk:
“If any old map will do to help you find your way out of the Alps,” Weick had said, “then surely any old story will do to help you find your way out of puzzles in the human condition.”
As Basbøll notes, the moral of the story keeps changing! In the earliest known tellings (by Hechter and Pullman), the only clear role of the map was to calm the soldiers so they could find their way back to camp on their own. By 1998 (or 2005), the map has an actual instrumental use. What is creepy (to me) is that the story has changed from a story about people using an external device (the map) as a way to calm themselves and make a reasoned course of action, to a parable about savvy managers (“leaders”) who can manipulate their underlings, to the conclusion that “any old story will do,” a claim that makes the original events (or non-events) irrelevant and would seem to me to encourage a form of scholarship that does nothing but confirm people’s preconceptions.
This is where we come to the point about anecdotes that I discussed in the three paragraphs at the start of the post.
2006: Basbøll and Henrik Graham uncover the Weick plagiarism. Various scholars engage in discussion of the case over the next several years.
Going beyond questions of plagiarism and scholarly ethics, the Szent-Györgyi/Holub/Weick story is relevant for our discussion because it sheds (anecdotal) light on the relevance of anecdotal reasoning and the nature of evidence. Weick’s misrepresentation of Holub’s texts suggests how such flexible uses of stories can lend themselves to flexible conclusions. Reliance on anecdotes is already iffy because of selection problems, but when the stories can be altered, the concepts of confirmation and falsification are turned on their head. (Weick and his ilk may very well argue that his stories are not evidence but merely entry points for involving the audience to think about his deeper principles, but then this just pushes the question back one step: What, then, is the evidence for those principles, and what is the role of anecdotes in the translation of principles to the outside world?)
2. Which leaf on which tree are you talking about?
I was giving a talk the other day to a group of statistics graduate students, on the subject of connections between teaching and research, and I mentioned one of my favorite sayings, “God is in every leaf of every tree.” What this means is that if you study any problem carefully and seriously enough, you will come to interesting statistical research problems.
If you’ve been reading this far, the above paragraph should sound familiar. The meta-statistical principle that “God is in every leaf of every tree” is very close to Weick’s “any old story will do to help you find your way out of puzzles in the human condition.”
Now, though, let me explore the differences as well as the similarities between the two quotes. In the leaf/tree scenario, it is important—crucial—that the statistician or scientist look carefully at the particular leaf in question. By carefully trying to resolve the contradictions involved in a single sampling problem, or a single causal inference, or a single dataset, or (in Holub’s field of immunology), a single medical patient, we might take ourselves along the path to a general solution—or at least a recognition and better understanding of a problem with the current scientific formulation. But you have to study the actual leaf! Studying an abstract leaf, or a secondhand story about a leaf, won’t do the trick. Constructing the leaf from a research hypothesis, or altering the leaf to fit the hypotheses, won’t give you as much (although you can still learn something, for example if you take care to note how many changes you needed to make to keep your audience happy).
Similarly, the history of the various misrepresentations and changes to Szent-Györgyi’s story, as related and interpreted by Basbøll, gives some sense of the various uses to which the story has been put. Each telling comes with its own “moral” or message.
3. The methodological attribution problem
One of my meta-principles of statistics, which in my published discussion of an article by Brad Efron I call the “methodological attribution problem”:
The many useful contributions of a good statistical consultant, or collaborator, will often be attributed to the statistician’s methods or philosophy rather than to the artful efforts of the statistician himself or herself. Don Rubin has told me that scientists are fundamentally Bayesian (even if they do not realize it), in that they interpret uncertainty intervals Bayesianly. Brad Efron has talked vividly about how his scientific collaborators find permutation tests and p-values to be the most convincing form of evidence. Judea Pearl assures me that graphical models describe how people really think about causality. And so on. I am sure that all these accomplished researchers, and many more, are describing their experiences accurately. Rubin wielding a posterior distribution is a powerful thing, as is Efron with a permutation test or Pearl with a graphical model, and I believe that (a) all three can be helping people solve real scientific problems, and (b) it is natural for their collaborators to attribute some of these researchers’ creativity to their methods.
The result is that each of us tends to come away from a collaboration or consulting experience with the warm feeling that our methods really work, and that they represent how scientists really think. In stating this, I am not trying to espouse some sort of empty pluralism—the claim that, for example, we would be doing just as well if we were all using fuzzy sets, or correspondence analysis, or some other obscure statistical method. There is certainly a reason that methodological advances are made, and this reason is typically that existing methods have their failings. Nonetheless, I think we all have to be careful about attributing too much from our collaborators’ and clients’ satisfaction with our methods.
As suggested in my discussion, I came to this meta-principle through my indirect experiences of hearing various researchers talk about the efficacy of their statistical methods. Theoreticians and methodologists often have extreme confidence in their approaches, and applied researchers often have what seems to me to be oddly strong opinions about statistical methods.
Playing tennis without a map
As the saying goes, research is when you don’t know what you’re doing. We don’t have maps; much of research can’t be automated. In the spirit of Weick’s writings, let me say that statistical models and scientific theories can play useful roles even when they are far from the truth.
I discussed this a few days ago in the context of the two-edged nature of belief. On the one hand, belief is powerful. By conditioning on assumptions, we can rule out alternatives and move quickly and surely. But belief is risky, especially since all of our beliefs, if stated precisely enough, are false. The resolution is that we can use the strength and power of beliefs to better study their limitations.
From a statistical (and philosophy-of-science) perspective, strong assumptions play two roles: First, with strong assumptions we can (often) make strong and precise inferences. The likelihood function is a powerful thing. Second, strong assumptions are strongly checkable and falsifiable. We take our models seriously, work with them as if we believe them unquestioningly, then use the leverage from this simulation of belief to check model fit and explore discrepancies between inferences and data.
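Here is a minimal sketch of that workflow, using made-up data and a deliberately strong (and wrong) normal model: fit the model as if you believe it, then use its own simulated predictions to check it. The data-generating choices and test statistic here are illustrative, not from any real analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: actually drawn from a skewed (exponential)
# distribution, but we commit to a strong normal-model assumption.
data = rng.exponential(scale=2.0, size=200)

# Step 1: take the strong assumption seriously. The maximum-likelihood
# fit of the normal model gives sharp, precise inferences.
mu_hat, sigma_hat = data.mean(), data.std()

# Step 2: use the fitted model's own leverage to check it. Simulate
# replicated datasets under the model and compare a test statistic
# (sample skewness) to the observed value.
def skewness(x):
    return np.mean(((x - x.mean()) / x.std()) ** 3)

obs = skewness(data)
reps = [skewness(rng.normal(mu_hat, sigma_hat, size=data.size))
        for _ in range(1000)]
p_value = np.mean([r >= obs for r in reps])

# A tiny p-value flags the discrepancy: the assumption was strong
# enough to be clearly falsified by its own predictions.
print(f"observed skewness: {obs:.2f}, check p-value: {p_value:.3f}")
```

The point of the sketch is the two-step rhythm: unquestioning belief while fitting, then a predictive check that lets the data push back.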
4. Building up a store of anecdotes
One difference between academic statistics and the academic study of management (as embodied by Karl Weick) is that applied statisticians such as myself live in the world of “tabletop experiments” (as they say in physics) whereas Weick (and, I assume, other scholars of his field) do “big science.” What I mean is that I work on lots of little problems and a few big ones, whereas Weick has a couple of main areas of focus. I have built up a huge store of personal anecdotes about statistics, whereas Weick must largely rely on the anecdotes of others. I’m David Sedaris, he’s Jay Leno (not the best analogy here; what would be a better example of a storyteller who uses recycled stories but tells them well?). Weick represents storytelling as an important part of his style, which makes him particularly dependent on interpreting events that did not happen to him.
I’m not saying that I’m better than Weick in this dimension; we’re just different. A historian of the Middle Ages, for example, would have no directly relevant personal experience at all. That’s just the way it is!
Getting back to the comparison: I really do use anecdotes as evidence, as well as to illustrate existing principles. My hundreds of statistical experiences have been important in the development of my ideas. This shows up even in my published papers and in how I evaluate the methodological contributions of others: I typically trust an idea to the extent that it helps solve what seems to me to be a real applied problem.
Weick is in a different situation: he uses stories to grab his audience and perhaps to inspire him to come up with new theories or modifications of existing theories. For Weick, anecdotes play the role of the map in some of his interpretations of Szent-Györgyi’s story: It’s ok if you get the details and sourcing wrong; all that matters is that it motivates you to move forward. As noted, that approach would not work in statistical research (or, I suspect, in medical research) because it would deprive us of the opportunity to learn from anecdotes’ specific features. Maybe it’s ok in Weick’s area of management research, as long as the stories and their alterations are accurately sourced.
5. Putting it all together
I have discussed several themes relating to the use of anecdotes in scientific learning. It’s easy to laugh at anecdotes, but I recognize that they are crucial in my meta-statistical understanding. That is, much of my judgment about what methods to use comes from my own personal experiences. I suspect that many people without my breadth of statistical experience are still relying on it indirectly through my books and articles.
In other fields, though, anecdotes play a different role, and their truth or falsity, even their sources, do not seem to be so important. But I make this judgment based on anecdotal evidence. All of this is also related, I believe, to our recent discussions of the problems of scientific publications, statistical significance, and so forth.
In particular, all of this is related to model checking. Anecdotes with accurate sourcing can reveal problems in a model, whereas when you start altering an anecdote and hiding its source, you lose a key opportunity for learning. This relates to the much-discussed selection problems in quantitative research.
6. The mysterious move to Switzerland
One of the few things that Weick adds along with his plagiarism of the Holub poem is to place the action, identified only as Hungarian soldiers in the Alps, in Switzerland. What were Hungarian troops doing in Switzerland, one might ask? My guess, following Nick Cox’s comment on our earlier discussion, is simple ignorance: Weick is American, and when we hear about the Alps, we automatically think “the Swiss Alps.” When we think about World War 1, we think about western Europe. Perhaps Weick in 1982 did not realize that not all the Alps are in Switzerland, nor was he primed to reflect upon the location of the theater of the war involving Austria-Hungary and Italy. This small error is irrelevant to the use of the story as a management parable—Weick could just as well have used a clearly fictional example such as that of Winnie the Pooh or the Sneetches or Yoruga la Tortuga (I imagine that the sort of managers who would hire a sociologist in the first place would love the morals of those stories!)—but it demonstrates the risks of copying a story without attribution. The deadly combination of word-for-word quotation and gratuitous error (as in the notorious case where Ed Wegman copied 2^n and it came out as 2n) reveals an ignorance of the underlying material, leading outsiders such as myself to question the scholar’s competence as well as his integrity.