Duncan Watts gave his new book the above title, reflecting his irritation with those annoying people who, upon hearing of the latest social science research, reply with: Duh-I-knew-that. (I don’t know how to say Duh in Australian; maybe someone can translate that for me?) I, like Duncan, am easily irritated, and I looked forward to reading the book. I enjoyed it a lot, even though it has only one graph, and that graph has a problem with its y-axis. (OK, the book also has two diagrams and a graph of fake data, but that doesn’t count.)
Before going on, let me say that I agree wholeheartedly with Duncan’s central point: social science research findings are often surprising, but the best results cause us to rethink our world in such a way that they seem completely obvious, in retrospect. (Don Rubin used to tell us that there’s no such thing as a “paradox”: once you fully understand a phenomenon, it should not seem paradoxical any more. When learning science, we sometimes speak of training our intuitions.) I’ve jumped to enough wrong conclusions in my applied research to realize that lots of things can seem obvious but be completely wrong. In his book, Duncan does a great job at describing several areas of research with which he’s been involved, explaining why this research is important for the world (not just a set of intellectual amusements) and why it’s not as obvious as one might think at first.
Everything is Obvious is half a science book and half a business book. The science is all about how information cascades make the world unpredictable, with the recurring theme that our commonsense understanding of the world is often wrong. The business part is about how a firm can use this information to make money. I’m not so interested in the business angle but I suppose that’s how you sell books these days. The business part of the book was ok—I’m not saying it was filler—it’s just that I’m not particularly interested in which format of videocassette wins, or whether Cisco Systems is a well-run company. I realize that a lot of people care about this sort of thing nowadays, but I’d rather talk about sports or politics.
I also realize that the business insights are relevant for many other aspects of the social world (including but not limited to all sorts of organizations that are not businesses). So my description of the book as half “business” is by no means a disparagement or a pigeonholing; I’m just trying to place its audience a bit.
The business angle aside, Everything is Obvious can perhaps best be situated by triangulation from two other similar efforts: Predictably Irrational by psychology researcher Dan Ariely and The Tipping Point by journalist Malcolm Gladwell:
– Like Predictably Irrational, Watts’s book is all about how human reasoning is flawed, how we’ve evolved to run away from saber-toothed tigers etc and how this doesn’t work so well in the modern world. And Watts, like Ariely, is writing about his own research, so you get fun stories about scientific problem-solving. The big difference between the books is that Ariely is writing about individual psychology, while Watts is focusing on how we understand social interactions.
– Watts, like Gladwell, is interested in unexpected social phenomena. But Watts has a more pessimistic view. Gladwell likes to tell dramatic stories in which you can pinpoint the one thing that got a trend started, whereas Watts argues that such triggers are essentially impossible to identify ahead of time. Watts has actually done the research on this, so I’m inclined to think that he’s getting closer to the truth than Gladwell is.
Everything is Obvious has blurbs from sociologists Eric Klinenberg and Dalton Conley, the aforementioned Ariely, some business dude named Guy Kawasaki, and . . . Alan Alda. Alan Alda?? I thought it was impressive when we got Nassim Taleb to blurb Red State Blue State. But Alan Alda is on the next level. What’s the three-degrees-of-separation there? My first guess is that Alda wrote a memoir and had the same literary agent that Watts has. But maybe there’s some other explanation. (I have no reason to doubt that Alda likes Duncan’s book; my question is how it got into Alda’s hands in the first place.)
That’s the overview. Now I’ll go through and give my thoughts as I read through the book.
Where had I heard that name before?
On the first page of the book, Watts quotes a stupid quote about sociology by “the physicist and science writer John Gribbin.” Hmm, John Gribbin, that’s a somewhat unusual name. Where have I seen that before? Aaahhhhh, I remember. Many years ago, a John Gribbin wrote a stupid book called The Jupiter Effect, which Martin Gardner took the trouble to trash. I did a quick web search, and indeed it is the same John Gribbin! (I think I encountered Gardner’s review when it was republished in one of his books.)
The Jupiter effect was that there was some year when lots of planets were on the same side of the sun, and this was supposed to cause all sorts of disruptions on earth. It didn’t, but, more to the point, the foolishness of this idea was clear ahead of time. If Gribbin had a Ph.D. in physics, then to write a book like The Jupiter Effect, I think he’d either have to be an idiot or unscrupulous (or some other possibilities, for example he was not an idiot but just extremely gullible, or maybe he was not unscrupulous, he just needed some money fast to pay for his grandmother’s lifesaving heart surgery, or whatever). At the very least, if this guy had a Ph.D. in physics he would probably have some friends who knew a bit of astronomy and he could’ve talked with them first before trying to write on the topic.
Sure, people write all sorts of silly things but usually they have some sort of political or religious excuse for why it’s ok to believe them. Truth is not the only important value in life, there are also other concerns such as political convictions, religious beliefs, and the simple desire to avoid offending people. Thus, I could understand someone falsifying data in order to support a political conclusion—it’s not something I would imagine doing except in extreme cases, but I can see the moral rationale for it—or even for the purpose of public health. (Recall the Linus Pauling conspiracy theory, under which the great chemist made knowingly wrong claims about the health benefits of Vitamin C in order to give millions of people the benefit of the placebo effect.) And I could imagine someone saying silly things so as not to contradict an established religion—again, religion is an important part of many people’s lives, and who am I to say that scientific truth is more important than religious faith.
But . . . the Jupiter effect??? It’s hard for me to see any explanations other than ignorance or fraud. That Ph.D. in physics suggests that the answer is fraud.
How can we think about this? My analogy is to a businessperson who held up a liquor store when he was in his late twenties, never actually got arrested for the crime, and since then has gone into the iffy-mortgage-loan business. Everyone deserves a second chance and the guy obviously has a lot of energy, but I wouldn’t actually trust him on anything.
Many years later, this guy is still publishing in large-circulation magazines. And continuing to be a fool (as indicated by the quote on the first page of Duncan’s book).
That’s as bad as if the Journal of the American Statistical Association were publishing articles by someone who was known to have published a false theorem. And if, to top things off, these JASA papers involved unsuitable statistical methods that use the data twice. Never would happen in statistics, I’m sure.
The albedo-obsessed billionaire
Duncan points out that social science is difficult and that physicists and other authority figures often don’t recognize the discoveries made by social scientists.
I’d like to add that this is not just a problem with social science! Recall the story of Nathan Myhrvold, the physics Ph.D. and Microsoft billionaire who likes to invoke albedo (the reflectivity of surfaces) whenever he gets stuck on a physics problem. A dubious claim about reflectivity of food in cooking transmuted into a flat-out wrong claim about the relevance of reflectivity of solar panels.
I think that a lot of the problem that Duncan notes, of people thinking that “everything is obvious,” arises from trust of (the wrong) authority figures. A center-right economist (Steven Levitt) and a left-wing journalist (John Lanchester) got faked out on albedo because they believed the word of a charismatic billionaire with a Ph.D. (As noted in the above link, I could’ve had a Ph.D. in physics too, and I certainly wouldn’t trust myself on a physics question. Trust me on statistics, don’t trust me on physics.)
Duncan discusses foolish fad-following in social science; it’s happening in other fields as well.
In chapter 1, Duncan talks about some of the unwritten rules of social encounters in the subway. I know what he’s talking about. The other day I was on the train and a woman sitting a few seats away from me flicked a candy wrapper onto the platform, just as the door was closing. Pretty rude, huh? I looked over at her, then glanced at another passenger who’d seen the event. The other passenger and I maintained brief eye contact and we made a can-you-believe-that? face to each other. That seemed about right. That little bit of littering didn’t seem quite worth anyone making a comment but it was worth a look. Another time I saw someone open up her gym bag on the subway—it was filled with Coke that had spilled from a big bottle. While the train was moving, bag-holder picked up the bag, poured its sticky liquid contents all over the floor, zipped up the bag, and then looked around apologetically—as if the apology were enough to cancel out this disgusting rat-friendly bit of spillage. Lots of other passengers appeared to be grossed out by this one but nobody said anything.
Duncan also talks about locking of doors. When I was in grad school and living with roommates in a half of a two-family house in Somerville, Massachusetts, we would leave the front door unlocked if any of us were home. Friends would sometimes just stop by–can you believe it? If the door was still unlocked at 10am or so, one of us would walk downstairs and lock it.
Contingent common wisdom and belief overkill
Duncan notes that many common sayings contradict each other. For example, The early bird catches the worm, but The early worm is eaten by the bird.
Later on, he mentions my work with Delia in which we found that different people have different combinations of political views: just about any combination of beliefs can feel coherent to you if you want them to cohere. Bob Jervis called this “belief overkill”; see here for a discussion by Jon Baron.
Rationality != selfishness
In discussing rational-choice models, Duncan writes: “What is so appealing about this way of thinking is its implication that all human behavior can be understood in terms of individuals’ attempts to satisfy their preferences.”
So far, no problem: after all, “satisfy their preferences” could always just be the tautology that people do what they want to do (and, as Duncan notes, rational choice models can easily devolve into tautology). But then he follows up with some examples: “when I vote, I choose the candidate I think will best serve my interests. . . . We have children when the benefits of a family . . . . outweigh the costs of increased responsibility, diminished freedom, and extra mouths to feed.” These examples remind me of the common error of conflating rationality with selfishness. I realize that Duncan would not make this error himself but I worry that a casual reader might not notice.
Theory and survey evidence suggest that people generally vote based on what they think is best for their community and their country, not based on what will serve their interests. And a big motivation for having children is to benefit the children themselves. Such motivations can fit into rational choice theory. (As I’ve written before, I think models of rationality can complement rather than compete with psychological or sociological explanations of behavior.)
My point here is just that if you’re not careful you can jump all too quickly from rationality to selfishness. It is perfectly possible to apply rational means to other-directed ends (or, for that matter, to be irrational in pursuit of selfish gains, as illustrated by the Human Highlight Film the other day.)
Two kinds of economists’ stories
Duncan writes that economists “illustrate the power of rational choice theory in a series of stories about initially puzzling behavior that, upon closer examination, turns out to be perfectly rational.”
That’s part of the story.
But I think the real power of pop-economics as a tool for explaining life is that it has two opposite forms of explanation:
1. People are rational and respond to incentives. Behavior that looks irrational is actually completely rational once you think like an economist.
2. People are irrational and they need economists, with their open minds, to show them how to be rational and efficient.
Argument 1 is associated with “why do they do that?” sorts of puzzles. Why do they charge so much for candy at the movie theater, why are airline ticket prices such a mess, why are people drug addicts, etc. The usual answer is that there’s some rational reason for what seems like silly or self-destructive behavior.
Argument 2 is associated with “we can do better” claims such as why we should fire 80% of public-schools teachers or Moneyball-style stories about how some clever entrepreneur has made a zillion dollars by exploiting some inefficiency in the market.
The trick is knowing whether you’re gonna get 1 or 2 above. They’re complete opposites!
It’s like Freudianism: if a person does X, that’s because of trauma Y that occurred early in life. But if the person does not do X, that’s also because of Y, it’s just that this time it’s repression. You can explain anything.
Theories that can explain anything are not necessarily useless. They can give understanding and point the way to further study. But it’s good to recognize ahead of time that the story could go in either direction.
Consequences for actions vs. consequences for outcomes
Duncan describes the story of a person who killed some people while driving a car under the influence of alcohol and points out the difficulty of imposing punishment. On one hand, the driver presumably didn’t intend to kill that family, thus maybe the punishment shouldn’t be so harsh. On the other hand, there would seem to be no political will for generally imposing harsh penalties on dangerous driving in the vast majority of cases where nobody gets hurt.
In thinking about this sort of example, I would separate two concerns:
1. Is the killing intentional or not? The law already distinguishes between different varieties of murder and homicide, so it doesn’t seem so relevant to me that the weapon in this case is a car.
2. Discouraging dangerous driving. I recall seeing some statistics that a small percentage of drivers are doing most of the crimes, so I’d think it would be possible to get these repeat offenders off the road.
Duncan writes, “it seems grossly disproportionate to treat every otherwise decent, honest person who has ever had a few too many drinks and driven home as a criminal and a killer. Yet, aside from the trembling hand of fate, there is no difference between these two instances.”
I disagree. First, who’s to say that the driver in question is an “otherwise decent, honest person”? I don’t know the guy, but not everyone out there is decent. And, even if you’re driving drunk, it’s possible to account for that to some extent. I was once in a taxi in Chicago where the driver reeked of alcohol. I was too lazy to get out of the cab so I went all the way to the airport. I’ll say this about the cabbie: he drove really, really carefully. I don’t think it’s too much to ask of an otherwise decent, honest person that, if he does drive drunk, he recognize he might be impaired, that he stop at every stop sign and every yellow and red light, and that he drive below the speed limit.
After all, it’s not like this dude hadn’t driven drunk before (see point 2 above).
So, yes, there are a lot of differences between the two instances. The distinction is statistical, but real: conditional on running a red light and killing several people while driving drunk, I’d say that this person is (a) likely not to be completely decent and honest and (b) likely to be a repeat offender.
I agree that it would be better to catch the repeat offenders before they kill, but it doesn’t seem too unreasonable, from a statistical perspective, to take away their freedom afterward.
Just to be clear: I’m not saying that Duncan is trying to get dangerous drivers off the hook, and in his chapter he discusses sociological justifications for punishment. I’m just trying to connect this thought-provoking example to the statistical idea that a small fraction of the people commit most of the offenses.
It is an oft-noted paradox that freedom requires order, that a global dictator can allow local liberalism. For example, Duncan praises “Zara, the Spanish clothing retailer that has made business press headlines for over a decade with its novel approach to satisfying customer demand.” The company tries out lots of clothing lines each year and gathers data at retail sites to see what’s selling. Fine: it’s an example of the power of experimentation. Similarly, Duncan and his colleagues at Yahoo experiment with different ideas on their website to better satisfy their customers.
It’s easy to do this because Yahoo is a mini-dictatorship, that is, a “firm.” Here’s another example: the head of a casino company says, “There are two ways to get fired from Harrah’s: stealing from the company, or failing to include a proper control group in your business experiment.”
Duncan used to work at Columbia University (I know him from having occasionally come to his amazing Friday afternoon seminars). Can you imagine the president of Columbia saying, “There are two ways to get fired from Columbia: molesting a student, or failing to include a proper control group in your teaching experiment”?
No, I didn’t think so. But the funny thing is, I’m pretty sure Columbia would be a better university if we were required to continually work on improving our teaching and if we were required to take careful measurements and use control groups and clearly-defined treatments. Columbia is a pretty free place, though, so the administration can’t make us formally experiment in our teaching (even in the unlikely event that they wanted us to). Paradoxically, the freedom at Columbia makes it more difficult for us to learn in the sort of bottom-up way associated with Zara, Yahoo, and Harrah’s.
Accuracy of different sorts of predictions
Duncan reports that, when predicting the winners of football games, a simple statistical model (based only on home-field advantage and the recent won-lost records of the teams) performs less than 0.1 percentage point worse than the predictions based on the Vegas odds. And the odds performed only 3 percentage points better than the simple prediction based on home-field advantage alone!
I assume Duncan is measuring performance as % of games predicted correctly. He reports that home teams win 58% of the time, so I suppose this means that the Vegas favorites win 61% of the time.
This particular topic is very close to my current research on complex models for voting and public opinion, and I have a few thoughts.
1. Why predict the winner? It would be more informative to predict the score differential (as in chapter 1 of Bayesian Data Analysis). By predicting just the winner, you’re adding noise, and discarding some useful information in the point spread.
2. That 61% hides some potentially useful information: with some of the games you can be pretty confident in your predictions, not so much for others.
I agree that 3% isn’t much (see our recent discussion of the gene that predicts 1% of the variation in life satisfaction surveys); still, it might be that this particular error measure is understating the amount of additional information in those Vegas odds. This is something that we have to think about when we add interactions to our voting models and get no visible improvement in aggregate predictions.
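To illustrate point 2, here’s a little simulation (entirely made-up numbers, not Duncan’s data, and my own hypothetical setup) of how two predictors can have nearly the same percent-correct while one carries noticeably more probabilistic information, as measured by Brier score:

```python
import random

def simulate(n_games=100_000, base_rate=0.58, seed=1):
    """Compare a base-rate-only predictor with a predictor that knows
    each game's true win probability (a made-up data-generating process)."""
    rng = random.Random(seed)
    naive_brier = sharp_brier = 0.0
    naive_correct = sharp_correct = 0
    for _ in range(n_games):
        # Each game gets its own "true" home-win probability,
        # scattered around the league-wide base rate.
        p = min(max(rng.gauss(base_rate, 0.10), 0.05), 0.95)
        y = 1.0 if rng.random() < p else 0.0  # 1 = home team wins
        # Naive predictor: always picks the home team, at the base rate.
        naive_brier += (base_rate - y) ** 2
        naive_correct += y == 1.0
        # Sharper predictor: knows each game's probability.
        sharp_brier += (p - y) ** 2
        sharp_correct += (y == 1.0) if p >= 0.5 else (y == 0.0)
    n = n_games
    return (naive_correct / n, sharp_correct / n,
            naive_brier / n, sharp_brier / n)

naive_acc, sharp_acc, naive_bs, sharp_bs = simulate()
# The two predictors pick the same winner in most games, so their
# percent-correct figures end up only a couple of points apart, even
# though the Brier scores separate them more cleanly.
```

The moral: the two predictors agree on the winner in most games, so percent-correct barely distinguishes them, while a proper scoring rule does. That’s the sense in which the 3% (or 0.1 percentage point) accuracy gap could be understating the extra information in the Vegas odds.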
A few other things:
– I freely admit to being a below-average driver.
– It’s not true that “every day in New York City five million people ride the subways.” There are five million rides each weekday. If everybody rides twice, that would be 2.5 million people riding the train.
– Chapter 2 discusses Eric Johnson and Dan Goldstein’s finding that people are much more likely to agree to organ donation if agreement is the default option: for example, in Germany only 12% of people agree to be organ donors, while the rate is 99.9% in Austria. But figure 1 of this paper by Kieran Healy reports a rate of actual cadaveric donation as 25 per million people in Austria and about 12 per million in Germany. A factor of 2 is big, to be sure, but not quite as large as you might expect from the numbers reported earlier. The actual numbers seem small too, but I don’t really have any sense of what to expect here.
– Duncan makes one of my favorite points about prediction markets, which is that if the stakes are small, participants can be motivated to manipulate them, but if the stakes are large, the outcomes themselves can be distorted (recall the 1919 World Series). This is one reason why, much as I enjoy betting, I don’t really go with the idea of bets as a foundation for probability theory, nor am I particularly sympathetic with arguments of the put-your-money-where-your-mouth-is variety.
– Duncan writes about how, when people are paid more, they respond with an increased sense of entitlement. That’s definitely true of me: no matter how much I get paid, I think I deserve a little bit more! The horrible thing is, even knowing that this is a generic feeling, I still feel that way.
– What would have happened if Duncan had written Yahoo instead of Yahoo! at various places in the book? I’m just curious. If I had to always write that I worked at Columbia! university, I think it would bother me after a while. But maybe not, maybe I’d just get used to it.
I recommend the book—it was very thought-provoking in a very similar way to Taleb’s books, even though Watts and Taleb have much different writing styles and philosophical perspectives. Everything is Obvious is a serious book both for the author and the reader: Watts is putting together ideas from several major research projects he has done over the past ten years or so, and thinking about how they affect our understanding of the social world and the decisions we make, as individuals and organizations, in this world.