Duncan Watts gave his new book the above title, reflecting his irritation with those annoying people who, upon hearing of the latest social science research, reply with: Duh-I-knew-that. (I don’t know how to say Duh in Australian; maybe someone can translate that for me?) I, like Duncan, am easily irritated, and I looked forward to reading the book. I enjoyed it a lot, even though it has only one graph, and that graph has a problem with its y-axis. (OK, the book also has two diagrams and a graph of fake data, but that doesn’t count.)
Before going on, let me say that I agree wholeheartedly with Duncan’s central point: social science research findings are often surprising, but the best results cause us to rethink our world in such a way that they seem completely obvious, in retrospect. (Don Rubin used to tell us that there’s no such thing as a “paradox”: once you fully understand a phenomenon, it should not seem paradoxical any more. When learning science, we sometimes speak of training our intuitions.) I’ve jumped to enough wrong conclusions in my applied research to realize that lots of things can seem obvious but be completely wrong. In his book, Duncan does a great job at describing several areas of research with which he’s been involved, explaining why this research is important for the world (not just a set of intellectual amusements) and why it’s not as obvious as one might think at first.
Everything is Obvious is half a science book and half a business book. The science is all about how information cascades make the world unpredictable, with the recurring theme that our commonsense understanding of the world is often wrong. The business part is about how a firm can use this information to make money. I’m not so interested in the business angle but I suppose that’s how you sell books these days. The business part of the book was ok—I’m not saying it was filler—it’s just that I’m not particularly interested in which format of videocassette wins, or whether Cisco Systems is a well-run company. I realize that a lot of people care about this sort of thing nowadays, but I’d rather talk about sports or politics.
I also realize that the business insights are relevant for many other aspects of the social world (including but not limited to all sorts of organizations that are not businesses). So my description of the book as half “business” is by no means a disparagement or a pigeonholing; I’m just trying to place its audience a bit.
The business angle aside, Everything is Obvious can perhaps best be situated by triangulation from two other similar efforts: Predictably Irrational by psychology researcher Dan Ariely and The Tipping Point by journalist Malcolm Gladwell:
– Like Predictably Irrational, Watts’s book is all about how human reasoning is flawed, how we’ve evolved to run away from saber-toothed tigers etc and how this doesn’t work so well in the modern world. And Watts, like Ariely, is writing about his own research, so you get fun stories about scientific problem-solving. The big difference between the books is that Ariely is writing about individual psychology, while Watts is focusing on how we understand social interactions.
– Watts, like Gladwell, is interested in unexpected social phenomena. But Watts has a more pessimistic view. Gladwell likes to tell dramatic stories in which you can pinpoint the one thing that got a trend started, whereas Watts argues that such triggers are essentially impossible to identify ahead of time. Watts has actually done the research on this, so I’m inclined to think that he’s getting closer to the truth than Gladwell is.
Everything is Obvious has blurbs from sociologists Eric Klinenberg and Dalton Conley, the aforementioned Ariely, some business dude named Guy Kawasaki, and . . . Alan Alda. Alan Alda?? I thought it was impressive when we got Nassim Taleb to blurb Red State Blue State. But Alan Alda is on the next level. What’s the three-degrees-of-separation there? My first guess is that Alda wrote a memoir and had the same literary agent that Watts has. But maybe there’s some other explanation. (I have no reason to doubt that Alda likes Duncan’s book; my question is how it got into Alda’s hands in the first place.)
That’s the overview. Now I’ll go through and give my thoughts as I read through the book.
Where had I heard that name before?
On the first page of the book, Watts quotes a stupid remark about sociology by “the physicist and science writer John Gribben.” Hmm, John Gribbin, that’s a somewhat unusual name. Where have I seen that before? Aaahhhhh, I remember. Many years ago, a John Gribbin wrote a stupid book called The Jupiter Effect, which Martin Gardner took the trouble to trash. I did a quick web search, and indeed it is the same John Gribbin! (I think I encountered Gardner’s review when it was republished in one of his books.)
The Jupiter effect was that there was some year when lots of planets were on the same side of the sun, and this was supposed to cause all sorts of disruptions on earth. It didn’t, but, more to the point, the foolishness of this idea was clear ahead of time. If Gribbin had a Ph.D. in physics, then to write a book like The Jupiter Effect, I think he’d either have to be an idiot or unscrupulous (or some other possibilities, for example he was not an idiot but just extremely gullible, or maybe he was not unscrupulous, he just needed some money fast to pay for his grandmother’s lifesaving heart surgery, or whatever). At the very least, if this guy had a Ph.D. in physics he would probably have some friends who knew a bit of astronomy and he could’ve talked with them first before trying to write on the topic.
Sure, people write all sorts of silly things but usually they have some sort of political or religious excuse for why it’s ok to believe them. Truth is not the only important value in life, there are also other concerns such as political convictions, religious beliefs, and the simple desire to avoid offending people. Thus, I could understand someone falsifying data in order to support a political conclusion—it’s not something I would imagine doing except in extreme cases, but I can see the moral rationale for it—or even for the purpose of public health. (Recall the Linus Pauling conspiracy theory, under which the great chemist made knowingly wrong claims about the health benefits of Vitamin C in order to give millions of people the benefit of the placebo effect.) And I could imagine someone saying silly things so as not to contradict an established religion—again, religion is an important part of many people’s lives, and who am I to say that scientific truth is more important than religious faith.
But . . . the Jupiter effect??? It’s hard for me to see any explanations other than ignorance or fraud. That Ph.D. in physics suggests that the answer is fraud.
How can we think about this? My analogy is to a businessperson who held up a liquor store when he was in his late twenties, never actually got arrested for the crime, and since then has gone into the iffy-mortgage-loan business. Everyone deserves a second chance and the guy obviously has a lot of energy, but I wouldn’t actually trust him on anything.
Many years later, this guy is still publishing in large-circulation magazines. And continuing to be a fool (as indicated by the quote on the first page of Duncan’s book).
That’s as bad as if the Journal of the American Statistical Association were publishing articles by someone who was known to have published a false theorem. And if, to top things off, these JASA papers involved unsuitable statistical methods that use the data twice. Never would happen in statistics, I’m sure.
The albedo-obsessed billionaire
Duncan points out that social science is difficult and that physicists and other authority figures often don’t recognize the discoveries made by social scientists.
I’d like to add that this is not just a problem with social science! Recall the story of Nathan Myhrvold, the physics Ph.D. and Microsoft billionaire who likes to invoke albedo (the reflectivity of surfaces) whenever he gets stuck on a physics problem. A dubious claim about reflectivity of food in cooking transmuted into a flat-out wrong claim about the relevance of reflectivity of solar panels.
I think that a lot of the problem that Duncan notes, of people thinking that “everything is obvious,” arises from trust of (the wrong) authority figures. A center-right economist (Steven Levitt) and a left-wing journalist (John Lanchester) got faked out on albedo because they believed the word of a charismatic billionaire with a Ph.D. (As noted in the above link, I could’ve had a Ph.D. in physics too, and I certainly wouldn’t trust myself on a physics question. Trust me on statistics, don’t trust me on physics.)
Duncan discusses foolish fad-following in social science; it’s happening in other fields as well.
In chapter 1, Duncan talks about some of the unwritten rules of social encounters in the subway. I know what he’s talking about. The other day I was on the train and a woman sitting a few seats away from me flicked a candy wrapper onto the platform, just as the door was closing. Pretty rude, huh? I looked over at her, then glanced at another passenger who’d seen the event. The other passenger and I maintained brief eye contact and we made a can-you-believe-that? face to each other. That seemed about right. That little bit of littering didn’t seem quite worth anyone making a comment but it was worth a look. Another time I saw someone open up her gym bag on the subway—it was filled with Coke that had spilled from a big bottle. While the train was moving, the bag-holder picked up the bag, poured its sticky liquid contents all over the floor, zipped up the bag, and then looked around apologetically—as if the apology were enough to cancel out this disgusting rat-friendly bit of spillage. Lots of other passengers appeared to be grossed out by this one but nobody said anything.
Duncan also talks about locking of doors. When I was in grad school and living with roommates in a half of a two-family house in Somerville, Massachusetts, we would leave the front door unlocked if any of us were home. Friends would sometimes just stop by–can you believe it? If the door was still unlocked at 10am or so, one of us would walk downstairs and lock it.
Contingent common wisdom and belief overkill
Duncan notes that many common sayings contradict each other. For example, The early bird catches the worm, but The early worm is eaten by the bird.
Later on, he mentions my work with Delia in which we found that different people have different combinations of political views: just about any combination of beliefs can feel coherent to you if you want them to cohere. Bob Jervis called this “belief overkill”; see here for a discussion by Jon Baron.
Rationality != selfishness
In discussing rational-choice models, Duncan writes: “What is so appealing about this way of thinking is its implication that all human behavior can be understood in terms of individuals’ attempts to satisfy their preferences.”
So far, no problem: after all, “satisfy their preferences” could always just be the tautology that people do what they want to do (and, as Duncan notes, rational choice models can easily devolve into tautology). But then he follows up with some examples: “when I vote, I choose the candidate I think will best serve my interests. . . . We have children when the benefits of a family . . . . outweigh the costs of increased responsibility, diminished freedom, and extra mouths to feed.” These examples remind me of the common error of conflating rationality with selfishness. I realize that Duncan would not make this error himself but I worry that a casual reader might not notice.
Theory and survey evidence suggest that people generally vote based on what they think is best for their community and their country, not based on what will serve their interests. And a big motivation for having children is to benefit the children themselves. Such motivations can fit into rational choice theory. (As I’ve written before, I think models of rationality can complement rather than compete with psychological or sociological explanations of behavior.)
My point here is just that if you’re not careful you can jump all too quickly from rationality to selfishness. It is perfectly possible to apply rational means to other-directed ends (or, for that matter, to be irrational in pursuit of selfish gains, as illustrated by the Human Highlight Film the other day.)
Two kinds of economists’ stories
Duncan writes that economists “illustrate the power of rational choice theory in a series of stories about initially puzzling behavior that, upon closer examination, turns out to be perfectly rational.”
That’s part of the story.
But I think the real power of pop-economics as a tool for explaining life is that it has two opposite forms of explanation:
1. People are rational and respond to incentives. Behavior that looks irrational is actually completely rational once you think like an economist.
2. People are irrational and they need economists, with their open minds, to show them how to be rational and efficient.
Argument 1 is associated with “why do they do that?” sorts of puzzles. Why do they charge so much for candy at the movie theater, why are airline ticket prices such a mess, why are people drug addicts, etc. The usual answer is that there’s some rational reason for what seems like silly or self-destructive behavior.
Argument 2 is associated with “we can do better” claims such as why we should fire 80% of public-schools teachers or Moneyball-style stories about how some clever entrepreneur has made a zillion dollars by exploiting some inefficiency in the market.
The trick is knowing whether you’re gonna get 1 or 2 above. They’re complete opposites!
It’s like Freudianism: if a person does X, that’s because of trauma Y that occurred early in life. But if the person does not do X, that’s also because of Y, it’s just that this time it’s repression. You can explain anything.
Theories that can explain anything are not necessarily useless. They can give understanding and point the way to further study. But it’s good to recognize ahead of time that the story could go in either direction.
Consequences for actions vs. consequences for outcomes
Duncan describes the story of a person who killed some people while driving a car under the influence of alcohol and points out the difficulty of imposing punishment. On one hand, the driver presumably didn’t intend to kill that family, thus maybe the punishment shouldn’t be so harsh. On the other hand, there would seem to be no political will for generally imposing harsh penalties on dangerous driving in the vast majority of cases where nobody gets hurt.
In thinking about this sort of example, I would separate two concerns:
1. Is the killing intentional or not? The law already distinguishes between different varieties of murder and homicide, so it doesn’t seem so relevant to me that the weapon in this case is a car.
2. Discouraging dangerous driving. I recall seeing some statistics that a small percentage of drivers are doing most of the crimes, so I’d think it would be possible to get these repeat offenders off the road.
Duncan writes, “it seems grossly disproportionate to treat every otherwise decent, honest person who has ever had a few too many drinks and driven home as a criminal and a killer. Yet, aside from the trembling hand of fate, there is no difference between these two instances.”
I disagree. First, who’s to say that the driver in question is an “otherwise decent, honest person”? I don’t know the guy, but not everyone out there is decent. And, even if you’re driving drunk, it’s possible to account for that to some extent. I was once in a taxi in Chicago where the driver reeked of alcohol. I was too lazy to get out of the cab so I went all the way to the airport. I’ll say this about the cabbie: he drove really, really carefully. I don’t think it’s too much to ask of an otherwise decent, honest person that, if he does drive drunk, he recognize he might be impaired, that he stop at every stop sign and every yellow and red light, and that he drive below the speed limit.
After all, it’s not like this dude hadn’t driven drunk before (see point 2 above).
So, yes, there are a lot of differences between the two instances. The distinction is statistical, but real: conditional on running a red light and killing several people while driving drunk, I’d say that this person is (a) likely not to be completely decent and honest and (b) likely to be a repeat offender.
I agree that it would be better to catch the repeat offenders before they kill, but it doesn’t seem too unreasonable, from a statistical perspective, to take away their freedom afterward.
Just to be clear: I’m not saying that Duncan is trying to get dangerous drivers off the hook, and in his chapter he discusses sociological justifications for punishment. I’m just trying to connect this thought-provoking example to the statistical idea that a small fraction of the people commit most of the offenses.
It is an oft-noted paradox that freedom requires order, that a global dictator can allow local liberalism. For example, Duncan praises “Zara, the Spanish clothing retailer that has made business press headlines for over a decade with its novel approach to satisfying customer demand.” The company tries out lots of clothing lines each year and gathers data at retail sites to see what’s selling. Fine: it’s an example of the power of experimentation. Similarly, Duncan and his colleagues at Yahoo experiment with different ideas on their website to better satisfy their customers.
It’s easy to do this because Yahoo is a mini-dictatorship, that is, a “firm.” Here’s another example: the head of a casino company says, “There are two ways to get fired from Harrah’s: stealing from the company, or failing to include a proper control group in your business experiment.”
Duncan used to work at Columbia University (I know him from having occasionally come to his amazing Friday afternoon seminars). Can you imagine the president of Columbia saying, “There are two ways to get fired from Columbia: molesting a student, or failing to include a proper control group in your teaching experiment”?
No, I didn’t think so. But the funny thing is, I’m pretty sure Columbia would be a better university if we were required to continually work on improving our teaching and if we were required to take careful measurements and use control groups and clearly-defined treatments. Columbia is a pretty free place, though, so the administration can’t make us formally experiment in our teaching (even in the unlikely event that they wanted us to). Paradoxically, the freedom at Columbia makes it more difficult for us to learn in the sort of bottom-up way associated with Zara, Yahoo, and Harrah’s.
Accuracy of different sorts of predictions
Duncan reports that, when predicting the winners of football games, a simple statistical model (based only on home-field advantage and the recent won-lost records of the teams) performs less than 0.1 percentage point worse than the predictions based on the Vegas odds. And the odds performed only 3 percentage points better than the simple prediction based on home-field advantage alone!
I assume Duncan is measuring performance as % of games predicted correctly. He reports that home teams win 58% of the time, so I suppose this means that the Vegas favorites win 61% of the time.
This particular topic is very close to my current research on complex models for voting and public opinion, and I have a few thoughts.
1. Why predict the winner? It would be more informative to predict the score differential (as in chapter 1 of Bayesian Data Analysis). By predicting just the winner, you’re adding noise, and discarding some useful information in the point spread.
2. That 61% hides some potentially useful information: with some of the games you can be pretty confident in your predictions, not so much for others.
I agree that 3% isn’t much (see our recent discussion of the gene that predicts 1% of the variation in life satisfaction surveys); still, it might be that this particular error measure is understating the amount of additional information in those Vegas odds. This is something that we have to think about when we add interactions to our voting models and get no visible improvement in aggregate predictions.
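The point about binarizing outcomes can be illustrated with a toy simulation (my own sketch with made-up numbers, not Watts's actual analysis): two predictors can be exactly tied on percent-of-winners-correct while one carries much more information about the score differential.

```python
import math
import random

random.seed(1)

n = 10_000
# True expected margin for each game (home score minus away score).
mus = [random.gauss(0, 7) for _ in range(n)]
# Observed margins: expected margin plus game-to-game noise.
margins = [mu + random.gauss(0, 10) for mu in mus]

# "Informed" predictor: knows the expected margin for each game.
pred_informed = mus
# "Coarse" predictor: only knows which side is favored, not by how much.
pred_coarse = [3.0 if mu > 0 else -3.0 for mu in mus]

def winner_accuracy(preds):
    # Fraction of games where the predicted winner matches the actual winner.
    return sum((p > 0) == (m > 0) for p, m in zip(preds, margins)) / n

def rmse(preds):
    # Root-mean-square error on the score differential itself.
    return math.sqrt(sum((p - m) ** 2 for p, m in zip(preds, margins)) / n)

# Both predictors pick the same winners, so winner-accuracy is identical...
print(winner_accuracy(pred_informed), winner_accuracy(pred_coarse))
# ...but the informed predictor is substantially closer on the margin.
print(rmse(pred_informed), rmse(pred_coarse))
```

Measured by percent-correct the two methods look the same; measured by error on the point differential, the informed predictor's extra information shows up clearly. That is the sense in which percent-correct can understate the information in the Vegas odds.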
A few other things:
– I freely admit to being a below-average driver.
– It’s not true that “every day in New York City five million people ride the subways.” There are five million rides each weekday. If everybody rides twice, that would be 2.5 million people riding the train.
– Chapter 2 discussed Eric Johnson and Dan Goldstein’s finding that people are much more likely to agree to organ donation if agreement is the default option: for example, in Germany only 12% of people agree to be organ donors, while the rate is 99.9% in Austria. But figure 1 of this paper by Kieran Healy reports a rate of actual cadaveric donation as 25 per million people in Austria and about 12 per million in Germany. A factor of 2 is big, to be sure, but not quite as large as you might expect from the numbers reported earlier. The actual numbers seem small too, but I don’t really have any sense of what to expect here.
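To make the gap between stated agreement and actual donation concrete, here is the arithmetic on the numbers quoted above (taking the figures from the text at face value):

```python
# Stated agreement rates (percent) vs. actual cadaveric donation
# rates (donors per million population), as quoted in the text.
stated_austria, stated_germany = 99.9, 12.0
actual_austria, actual_germany = 25.0, 12.0

stated_ratio = stated_austria / stated_germany  # Austria vs. Germany, stated
actual_ratio = actual_austria / actual_germany  # Austria vs. Germany, actual

print(round(stated_ratio, 1))  # → 8.3
print(round(actual_ratio, 1))  # → 2.1
```

So the default effect shrinks from a factor of roughly 8 in stated agreement to a factor of roughly 2 in actual donations.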
– Duncan makes one of my favorite points about prediction markets, which is that if the stakes are small, participants can be motivated to manipulate them, but if the stakes are large, the outcomes themselves can be distorted (recall the 1919 World Series). This is one reason why, much as I enjoy betting, I don’t really go with the idea of bets as a foundation for probability theory, nor am I particularly sympathetic with arguments of the put-your-money-where-your-mouth-is variety.
– Duncan writes about how, when people are paid more, they respond with an increased sense of entitlement. That’s definitely true of me: no matter how much I get paid, I think I deserve a little bit more! The horrible thing is, even knowing that this is a generic feeling, I still feel that way.
– What would have happened if Duncan had written Yahoo instead of Yahoo! at various places in the book? I’m just curious. If I had to always write that I worked at Columbia! university, I think it would bother me after a while. But maybe not, maybe I’d just get used to it.
I recommend the book—it was very thought-provoking in a very similar way to Taleb’s books, even though Watts and Taleb have much different writing styles and philosophical perspectives. Everything is Obvious is a serious book both for the author and the reader: Watts is putting together ideas from several major research projects he has done over the past ten years or so, and thinking about how they affect our understanding of the social world and the decisions we make, as individuals and organizations, in this world.
P.S. I’ll have to take back that characterization of Steven Levitt as “center-right,” having read that he thought that Barack Obama would be the greatest president in history.
Just to be fair to John Gribbin, he did repudiate his Jupiter Effect book in 1980 (before the time of the claimed effect). He has since written a large number of popular science books.
From the Wikipedia entry:
Writing a second book claiming that the Jupiter Effect triggered the Mount St. Helens eruption . . . that doesn't sound like much of a repudiation! As I said, none of us is perfect, we all make mistakes in judgment (in this case, I assume Gribbin's desire for quick cash outweighed any moral qualms about polluting the public discourse on science), but the whole story doesn't give me any reason to trust anything else the guy has written. Saying he's sorry is fine, that doesn't cost him any money.
What would have happened if Duncan had written Yahoo instead of Yahoo! at various places in the book?
Autocorrect, find and replace, or editors would have cleaned it up.
Alan Alda hosted Scientific American Frontiers on PBS, which sometimes covered these types of issues, so it is possible they crossed paths that way…
Professor Alda is now Visiting Professor for Communicating Science at Stony Brook University, so it's not hard to see how a book on popular science might land on his desk.
(but yes, he's also *that* Alan Alda)
Andrew, I encourage you (and others) to speak up when you see someone doing something antisocial like deliberately spilling a drink on the floor. Speaking up is how we get a culture in which people don't do this kind of thing; letting people do it without comment is how we get a culture in which they do.
As for Alan Alda: something about him rubs me the wrong way — "smarmy" is the word that comes to mind — but he plays a good "curious everyman" as the host of Scientific American Frontiers on PBS, and he seems genuinely curious, and interested in the stuff they cover. Of course, he's an actor, so maybe he's just doing a good job at pretending to be curious and interested. On the other hand, he's not a very good actor, so I think he wouldn't be able to fool me so consistently. So, maybe it's not so odd that he did a blurb for the book; perhaps he even met Watts on one of the shows.
I'm a big fan of "put your money where your mouth is" as a way of getting people to think seriously about a proposition. Someone will say something like "There's no chance that such-and-such will happen." If it's just hyperbole that's fine, I enjoy hyperbolic statements and wouldn't want to quash them. But if the person is making a serious claim, sometimes I'll ask them to quantify what they mean by "no chance" and they'll say "less than 1%" or something, to which, if I think they're radically underestimating, I'll reply "I'll take that bet: I'll put $10 on it." Almost always, they backpedal and come up with a new number that is often very different from 1%, maybe even 10% or 20%. Even when the topic is unimportant, I think this is a good way to get people to take statistical claims seriously. And every now and then it does lead to a fun wager.
I think you may have missed the point of the drunk driving discussion (or maybe you didn't miss it). Watts is saying that logically we should treat people the same for driving drunk whether they kill somebody or not, since (he supposes) it's only chance that separates the two cases. Suppose, for instance, that even though your drunk taxi driver was being very careful, he still killed somebody. Could happen: somebody stumbles from the sidewalk and sprawls right in front of the cab, or something. The driver would (Watts supposes) have been punished very severely. Instead, nothing happened to him at all…and even if he'd been caught driving drunk he presumably wouldn't have spent time in jail. Watts suggests that the difference between huge punishment and minimal punishment shouldn't hinge on whether some random thing (like someone stumbling into the street) happens or not.
I disagree with Watts and agree with you: it's OK to punish a person who accidentally kills someone a lot more heavily than someone who doesn't. But that's because I disagree with Watts' supposition that the only difference between a drunk driver who kills and one who doesn't is luck. As you point out, some drunk drivers drive really really carefully, so they're presumably a lot less likely to kill someone and thus less likely to be punished harshly. We don't want people to drive drunk, but if they do it's good to have an incentive for them to be careful! If they're going to be punished the same whether they are careful or not, maybe they won't be careful! But although I agree with you, it still seems to me that you may have missed Watts' point.
"- What would have happened if Duncan had written Yahoo instead of Yahoo! at various places in the book? I'm just curious. If I had to always write that I worked at Columbia! university, I think it would bother me after awhile. But maybe not, maybe I'd just get used to it."
This is the best part of the post.
"But figure 1 of this paper by Kieran Healy reports a rate of actual cadaveric donation as 25 per million people in Austria and about 12 per million in Germany"
Why would you look there when it's right in Johnson and Goldstein's Figure 3 (p 1339)?
Do defaults save lives?
It shows the average rates for all opt-out and opt-in countries studied in two independent investigations.
In that case, I'm glad you read all the way to the end to get to the good part!
Thanks for the reference. I agree that your paper shows similar results. As you know, I'm a huge fan of defaults and I love this work. I just wanted to point to that next step, from the 99.9% to the actual rate of donations.
I have failed to read a Kawasaki book; but from skimming at the local Borders I gather he is one of those people who are hot on the trail of ideas such as: why don't we slice the bread before we sell it? Why don't we fold the toilet paper into a pleasant triangle? Important, if seemingly trite, ideas.
What should I have said to the litterer and the Coke dumper? I understand the value of speaking up but am not quite sure what to say in this situation.
Once on the street I saw some young men behaving rudely and said: If your mothers could see you now they'd be very disappointed! They seemed a bit surprised by this bit of retro-scolding.
Andrew, you ask what I think you should have said to the Coke dumper. (What if you had to write Columbia University (TM) every time….) I don't know, but I'm not sure it matters very much as far as obtaining the desired result, which is to send a signal to them (and everyone else in the train car, for that matter) that that sort of behavior is unacceptable.
For me and perhaps for you too, there's a tendency to not want to have an unpleasant interaction, which might suggest that it would be good to be polite: "I hate to interrupt what you're doing, but the rest of us would really appreciate it if you wouldn't spill your Coke on the floor." I think that would be better than nothing, but not really very good. Better is to go ahead and let the other person experience it as an unpleasant interaction: "Hey, what the hell are you doing? You don't just dump your f**king Coke on the floor, you a**hole!" I mean, they are in fact being a jerk, it's OK to be harsh to them! The goal, as I see it, is not to have them apologize right then and see the error of their ways — they'll probably just give you the finger or something no matter what you do — it's to get them to realize that it is culturally unacceptable for them to behave that way.
Just twenty years ago, nobody picked up after their dog, and people would light cigarettes indoors in public places without thinking twice about it. Now, those things just aren't done. Well, indoor smoking is still OK in some parts of the country, but not in lots of them. The main thing that has changed is the sense of what is or isn't acceptable.
You remember that famous ad that shows an American Indian looking over a landscape strewn with soda cans and garbage, with a tear creeping down his face? A colleague here claims that that ad was counterproductive. It's famous, won all kinds of awards, and was objectively one of the most memorable ads of its day…but my colleague claims that it sent a strong message that "here in America, most people litter." As you know, many experiments (such as the now-famous one that looked at what message gets more hotel guests to reuse their towels on Day 2) find that people's beliefs about how other people behave are much more perceptive than requests based on moral or ethical arguments.
So, tell the Coke-spiller s/he is being a jerk by acting in this way that no normal person would. It doesn't really matter how you convey that message, I think.
Oops, I meant much more effective, not much more perceptive, at the end of the penultimate paragraph.
Quick translation for you, Andrew. In Australia we use der instead of duh.
Note that the Coke wasn't exactly being dumped from a can.
Wanton littering is different from pragmatic damage minimization. If it was a metal train floor and I had, say, an expensive laptop in the bag, I'd condone the gesture.
The link to Alan Alda is probably through his former supervisor, Steven Strogatz. There is a long description in Strogatz' book Sync about a meeting with Alda.
Rahul, I know the Coke wasn't being dumped from a can.
Of course there are circumstances in which dumping Coke on the floor is acceptable (in fact, there are circumstances in which just about anything is acceptable). In absence of those circumstances, dumping soda on the floor is not acceptable.
I'm sure we agree.
I have a 1998 book by Gribbin, The Search for Superstrings, Symmetry, and the Theory of Everything. It is a conventional explanation of mainstream physics, and he explains it quite well. Superstrings are highly speculative, and almost certainly wrong, but he describes what many reputable physicists believe. It is a good book, even if superstrings are wrong. So I say Gribbin is not a crackpot. On the other hand, I have been duped by fallacious arguments by Gladwell and Levitt. I have quit reading them, because I cannot trust what they say.
My guess is that Gribbin is not a crank, he knew all along that the Jupiter effect was b.s., but he saw a payday in writing that book. Then at some point later on he decided to go legit as a science journalist. That's fine–lots of people have shady pasts–but it doesn't incline me to trust him. Especially given his foolish statement on social science which Watts quoted.
I don't think Myhrvold is a crank either; my guess is that he surrounds himself with yes-men and so he thinks that every clever idea he comes up with is actually true. I make lots of mistakes too, but I work in an environment in which people are encouraged to question me, not to just go around saying how brilliant I am.
With hard work and a good, helpful supervisor, it's possible to get a physics PhD without being very good at physics. I think John Gribbin may be such a case. Having said that, I am sure he has other talents which might make him good at writing about physics in an engaging way for laymen.