“The British amateur who debunked the mathematics of happiness”

Andrew Anthony tells the excellent story of how Nick Brown, Alan Sokal, and Harris Friedman shot down some particularly silly work in psychology. (“According to the graph, it all came down to a specific ratio of positive emotions to negative emotions. If your ratio was greater than 2.9013 positive emotions to 1 negative emotion you were flourishing in life. If your ratio was less than that number you were languishing.” And, yes, the work they were shooting down really is that bad.)
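For the curious: where does a number like 2.9013 even come from? According to Brown, Sokal, and Friedman’s critique, it traces back to the Lorenz system, a set of differential equations from fluid dynamics whose well-known transition to chaos the debunked paper relabeled as the boundary between languishing and flourishing. Here’s a minimal sketch (mine, not theirs) of the system and its standard chaos threshold, using the classic parameter values σ = 10 and b = 8/3:

```python
import numpy as np

def lorenz_trajectory(r, sigma=10.0, b=8.0/3.0, dt=0.001, steps=100_000):
    """Integrate the Lorenz system with simple Euler steps; return x(t)."""
    x, y, z = 1.0, 1.0, 1.0
    xs = np.empty(steps)
    for i in range(steps):
        dx = sigma * (y - x)
        dy = x * (r - z) - y
        dz = x * y - b * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        xs[i] = x
    return xs

sigma, b = 10.0, 8.0 / 3.0
# Standard result: the Lorenz fixed points lose stability (onset of chaos) at
# r_c = sigma * (sigma + b + 3) / (sigma - b - 1) ~= 24.7368.
r_c = sigma * (sigma + b + 3) / (sigma - b - 1)
print(f"Lorenz chaos threshold: r_c = {r_c:.4f}")

# Below r_c the trajectory settles down; above it, it oscillates chaotically.
# Per the critique, the 2.9013 figure traces back to this threshold, relabeled
# as a ratio of positive to negative emotions.
for r in (20.0, 28.0):
    tail = lorenz_trajectory(r)[-20_000:]
    print(f"r = {r}: std of x over the final stretch = {tail.std():.3f}")
```

The real point of Brown et al.’s refutation is that nothing justifies applying these fluid-dynamics equations to questionnaire data about emotions in the first place; the four-decimal-place precision of 2.9013 is what you get when you take that unjustified mapping literally.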

If you want to see what the fuss is about, just google “2.9013.” Here’s an example (from 2012) of uncritical reporting of the claim; here’s another from 2010; here’s one from 2011 . . . well, you get the idea.

And here’s a quick summary posted by Rolf Zwaan after Brown et al. came out with their paper.

I know Sokal and Brown, so this story was not news to me. I didn’t post anything about it on this blog because it seemed to be getting enough coverage elsewhere. I think Nick Brown did a good job tracking this one down, and I don’t envy him and his coauthors the effort it took to publish their criticism in the American Psychologist. I don’t typically have the patience for this sort of thing. For example, I recently submitted a methodological criticism of a paper to the American Sociological Review, but when they rejected it (not because they said I was wrong, but because they said they only have the space to publish a small fraction of their submissions, and I don’t think corrections of previously published papers in their journal get any sort of priority), I just wrote up the story in Chance. Which is fine, Chance is great, but nobody reads Chance. And of course I did not attempt to publish a letter in Psychological Science for each of their flawed papers (that would be a lot of letters!), nor did I bother writing a letter to PNAS regarding that horrible, horrible cubic polynomial fit leading to the implausible claim that a particular sort of air pollution is causing 500 million Chinese people to lose an average of five years of life.
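(If you haven’t seen that pollution paper, the methodological issue is the use of a high-degree polynomial to estimate a jump at a cutoff. Here’s a toy simulation of the general problem, with invented data that have nothing to do with the actual study: when you fit polynomials on either side of a cutoff to data with no true discontinuity, the estimated “jump” moves around with the arbitrary choice of polynomial degree.)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(-10, 10, n)
y = np.sin(x / 4) + rng.normal(0, 0.5, n)  # smooth curve, true jump = 0

def fit_jump(y, degree):
    """Fit separate degree-p polynomials on each side of x = 0;
    return the estimated discontinuity there."""
    D = (x >= 0).astype(float)
    cols = [x**p for p in range(degree + 1)]        # 1, x, ..., x^p
    cols += [D * x**p for p in range(degree + 1)]   # same terms, right of cutoff
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[degree + 1]  # coefficient on D = estimated jump at x = 0

# The true jump is exactly zero, yet the estimate shifts with the degree.
for degree in (1, 2, 3, 4):
    print(f"degree {degree}: estimated jump = {fit_jump(y, degree):+.3f}")
```

Each degree gives a different answer from the same data, which is the problem: the headline estimate is an artifact of a modeling choice, not a feature of the data.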

In that sense, what’s really special about Brown, Sokal, and Friedman is not just that they noticed a really bad but popular paper and demonstrated its flaws (although they did a good job on that), but that they went to the effort to get their criticism published in a high-profile journal, rather than just doing a drive-by on a blog. At the point where others of us might have said, Ahhhh, chuck it, it’s not worth the effort, they went to work.

I was bothered by a headline

But that’s not why I wrote this post. The reason is that I was bothered by the subheadline, “The astonishing story of Nick Brown, the British man who began a part-time psychology course in his 50s – and ended up taking on America’s academic establishment.” Or, to be precise, I was bothered by the last two words: “academic establishment.” I have no doubt that the author of the debunked work is indeed an establishment figure in American academia; the news article (see link at top of this post) makes that clear enough. But I’m also pretty sure that (a) most of America’s academic establishment had never heard of this work, and (b) many of the establishment figures who had heard of it thought it was silly. After all, I’m part of America’s academic establishment too! So, while I liked Andrew Anthony’s reporting on this one, I thought the subheadline was misleading. I don’t think Brown et al. bucked the establishment. I think they found a bad paper and then put in a big pile of effort not just to refute it, but to get their refutation published in a visible place. It’s not so much that they needed to fight the establishment; it’s more that they had to swim through the molasses that is the conventional attitude presuming a scientific claim is correct just because it has been published in a respected journal.

It probably seems silly to make such a fuss over a headline, but I do think my distinction here is important. Here’s why. Consider two possible stories:

1. Whistleblower fights a heroic battle with bad guys who are trying to destroy his career.

2. Truthteller struggles to get people to accept the discomfort of publicly reassessing a published claim.

Story #1 does happen sometimes (see here, for example), and of course people who break the rules have a notorious pattern of not admitting it even after getting caught, but this more recent case seems more like an example of story #2. I doubt that a lot of researchers in psychology actually believed that “2.9013” crap; it was just easier to let it sit there than to confront it.

Similarly, I doubt many researchers actually believe the claim that air pollution is costing five years of life for half of China’s population, but I think it would be difficult for me to publish my criticism of the claim in as high-visibility a venue as the one where the original article appeared. Heck, people even had trouble publishing non-replications of Bem’s ESP study. The attitude is that, once an article appears in print, the standards for shooting it down are very high. At one level I can understand this (the original article requires substantive research, while the correction can often be done using methodological criticism alone), but the result is that lots of stuff that shouldn’t really be believed sits around in the literature.

That is a story worth telling, and that’s why I wanted to emphasize that the tale of Nick Brown, while it does feature some colorful characters (including the inventor of the notorious 2.9013), really fits into the category of: “Truthteller struggles to get people to accept the discomfort of publicly reassessing a published claim.”

It’s not one man against the establishment; it’s an establishment that prefers to inconspicuously bury its failures rather than draw attention to them, perhaps on the theory that errors will eventually disappear without any special effort (in the words of replication-backlasher Mina Bissell, “non-valid findings will eventually be weeded out naturally when other careful scientists repeatedly fail to reproduce them”). That may be so, but I appreciate the work of Nick Brown and others who speed this process along.

P.S. In an amusing twist, John Gottman (who we’ve encountered in this blog before) has a cameo role in the story—as a skeptic!

8 thoughts on ““The British amateur who debunked the mathematics of happiness””

  1. The newspaper in question (The Observer) is the major liberal (in the US sense) Sunday paper in the UK. It’s now owned by the same company as the better-known Guardian, which, for example, broke the Edward Snowden story. I suspect that may explain why both “establishment” and “American” figure as adjectives to describe the perceived collective bad guys in the story.

    To be fair to Andrew Anthony, though, he didn’t take that sort of line with me when interviewing me (if you look him up on Amazon, you’ll see that his bleeding-heart days seem to be behind him), and in my experience, sub-editors (who write the headlines and other blurb in place of the author of the article) don’t always fully appreciate the subtleties of every story.

    All the credit for getting our article published in American Psychologist should go to my coauthors: Harris Friedman, for getting the first draft sufficiently below the snarkiness threshold that the editors of AP at least read it all the way through, and Alan Sokal for, first, insisting that we offer “first refusal” to AP and, second, leading the drafting of the masterful letter that caused them to change their minds within about three hours after they had initially rejected our article as an out-of-time (by seven years) “comment”.

  2. Perhaps a better subheadline might have read “ended up taking on the status quo”. Their behavior was off the equilibrium path, at least for an untenured academic.

    What I don’t get is why tenured faculty don’t do more of these things. They are the ones with the power to change the status quo. Selection, norms, other incentives?

    I suspect in their case it’s not “mortgages” or other financial interests keeping them from confronting the status quo.

    • Anon:

      Speaking as a tenured faculty member who does do this sort of thing sometimes: There’s a big incentive simply not to bother trying to place that correction in a top journal. As noted above, it’s a lot of effort. Again, see here for an example story. It just doesn’t seem like the best use of my time to fight with journal editors to publish a correction of a paper that, ultimately, I don’t care that much about.

      • I understand, but that suggests social science is inconsequential. Also, faculty could come up with other solutions (an online journal of corrections, or whatever?), not just battle editors. If science is broken, fix it.

