Should Jonah Lehrer be a junior Gladwell? Does he have any other options?

Remember Jonah Lehrer—that science writer from a few years back whose reputation was tarnished after some plagiarism and fabrication scandals? He’s been blogging—on science!

And he’s on to some of the usual suspects: Ellen Langer’s mindfulness (see here for the skeptical take) and—hey—“an important new paper [by] Kyla Haimovitz and Carol Dweck” (see here for background).

Also a post, “Can Tetris Help Us Cope With Traumatic Memories?” and another on the recently questioned “ego-depletion effect.”

And lots more along these lines. It’s This Week in Psychological Science in blog form. Nothing yet on himmicanes or power pose, but just give him time.

Say what you want about Malcolm Gladwell, but he at least tries to weigh the evidence, and he’s willing to be skeptical. Lehrer goes all-in, every time.

It’s funny: they say you can’t scam a scammer, but it looks like Lehrer’s getting scammed, over and over again. This guy seems to be the perfect patsy for every Psychological Science press release that’s ever existed.

But how could he be so clueless? Perhaps he’s auditioning for the role of press agent: he’d like to write an inspirational “business book” with someone who does one of these experiments, so he’s getting into practice and promoting himself by writing these little posts. He’d rather write them as New York Times Magazine articles or whatever, but that path is (currently) blocked for him, so he’s putting himself out there as best he can. From this perspective, Lehrer has every incentive to believe all these Psychological Science-style claims. It’s not that he’s made an affirmative decision to believe; it’s more that he gently turns off the critical part of his brain, the same way a sports reporter might focus only on the good things about the home team.

Lehrer’s in a tough spot, though, as he doesn’t have that much going for him. He’s a smooth writer, but there are other smooth writers out there. He can talk science, but he can’t get to any depth. He used to have excellent media contacts, but I suspect he’s burned most of those bridges. And there’s a lot of competition out there: lots of great science writers. He might have to go out and get a real job at some point.

39 thoughts on “Should Jonah Lehrer be a junior Gladwell? Does he have any other options?”

  1. Story time. I was spending a summer in NYC about 5 years ago, and had a little fling with a student at Columbia. One day, we were walking around Central Park talking about emotions or some such thing, and she recommended a book by Lehrer. So I made a mental note and later on googled the guy, who apparently had graduated from Columbia and then immediately gone into journalism. Hmmm, kinda skeptical, but I decided to give it a chance. So I started listening to one of his talks, and he kicked off with this story about how he was in the supermarket in the middle of the night trying to decide between a box of Froot Loops and Frosted Flakes or something, and had this epiphany that he needed to write a book about the secrets of the neurobiology of decision-making, or some garbage like that. I’m probably embellishing a bit, but the point is that it didn’t take long to conclude that the guy was full of it. So the next time I saw my friend, I asked her if she had really read that book, because that Lehrer guy is totally full of crap. She said yes, and that he’s a great writer and totally qualified because he graduated from Columbia and worked with Kandel, and that I must be jealous or plain clueless to be calling out this guy who worked with a Nobel laureate.

    I had never felt so vindicated as when, a few years later, I heard that Lehrer was exposed, though I had lost contact with my friend by that point. I don’t know what the moral of this story is… But I think it’s interesting that people, even scientists (my friend was in the hard sciences), are blinded by irrelevant information rather than judging a work for what it is (dilution effect?). I don’t know if any amount of effort will prevent susceptibility to terms like “Harvard study”, “Nobel laureate”, “[Ivy League] graduate”, etc., as long as the information therein doesn’t offend people’s intuition or sensibilities in any serious way. And Lehrer seems to operate in that sweet spot: buzzwords on his CV that are seemingly informative of his qualifications, along with a cute and easy-going writing style that gives the illusion of understanding something beyond the surface level without ever really going deep enough to make people think… because that takes effort, and if people wanted to put in the effort to try and understand something, they wouldn’t be reading Lehrer to begin with…

  2. Even in his supposed area of expertise, he was posting questionable science writing on his blog years before the plagiarism scandal. He’s either incapable of detail-oriented thinking, for whatever reason, or he slipped through the cracks on the science side of his education.

  3. Thanks for reading my little blog. I’m a long-time fan of yours and I appreciate the critical comments. However, I would disagree that I’m “getting scammed, over and over again” by the peer-reviewed literature. When I choose which papers to write about, I try to focus on papers that build on the literature (or complicate it) in interesting ways. In other words, I’m less interested in new effects than in updates and revisions to old ones. Take that ego-depletion blog you link to. The reason I wanted to write about that paper is because it explicitly addressed a longstanding contradiction in the self-control literature, in which performing two acts of self-control at the same time (such as having to pee and suppressing an emotion) led to the opposite of ego-depletion. (The ego gets stronger, not weaker.) TLDR: the paper provided evidence that ego-depletion theory might not even apply in many situations requiring self-control. (In an earlier blog, I wrote about a Dweck paper that more explicitly addressed the limitations of ego-depletion theory: http://www.jonahlehrer.com/blog/2015/5/4/does-the-science-of-self-control-diminish-our-self-control)

    The same principle applies to the other blogs you link to. There is a large and persuasive (at least to me) body of research on memory reconsolidation and its potential role in helping people deal with traumatic memories. The Tetris paper I cite helped clarify exactly when the therapeutic effect existed: it turned out the temporal window was extremely small.

    Of course, we have a larger disagreement here. Although I’ve written before about the serious statistical problems affecting science – I got plenty of flak for an old piece on the decline effect in the New Yorker – I don’t think these problems mean that every social science paper is a hopeless mess, or that recent papers in psychological science aren’t worth covering. Quite the opposite: I remain very interested in how the research, which I believe greatly benefits from informed critics, continues to evolve, especially on subjects I’m interested in. All of the ideas I write about are provisional and subject to constant revision. That doesn’t mean they have no value. I wish I was a better skeptic and that I had a firmer grasp of statistical modeling and its misuses. But that’s why I read you.

    As for my motives… The main reasons I write my blog are: 1) it forces me to read journals, 2) it gives me an excuse to interact with scientists, and 3) I need the practice.

    • Ugh. That JPSP paper by Dweck and colleagues is a hot mess involving a laundry list of commonly encountered problematic practices: a misunderstanding of factor analysis, inappropriate use of difference scores, the incorrect interpretation of the difference between “significant” and “non-significant” (the fallacy Andrew has written so much about; see the numerical sketch below), and an apparent confusion of GPA with GPA-residuals.
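
      That last point is easy to illustrate with made-up numbers (purely hypothetical, not taken from the paper): two subgroup estimates can sit on opposite sides of the p = 0.05 line even though the comparison between them is nowhere near significant. A minimal sketch in Python:

          import math

          def z_to_p(z):
              # two-sided p-value for a z-score, via the normal CDF
              return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

          # Invented estimates with identical standard errors:
          est_a, se_a = 0.25, 0.10   # group A: z = 2.5, p ~ 0.01 -> "significant"
          est_b, se_b = 0.10, 0.10   # group B: z = 1.0, p ~ 0.32 -> "not significant"

          diff = est_a - est_b                    # 0.15
          se_diff = math.sqrt(se_a**2 + se_b**2)  # ~0.14
          print(z_to_p(diff / se_diff))           # ~0.29: the comparison itself is not close to significant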

    • “Hence I am posting this on our blog, where anyone has an opportunity to respond. That’s right, anyone. [Jonah Lehrer] can respond, and so can anyone else… This is open discussion…” – Andrew Gelman*

      Just wanted to say that, even though I don’t care much about this topic, it is nice to see you come here and defend your work in a reasonable and logical manner that adheres to the best norms of scientific discussion. You say what you believe, give evidence for believing it, and simultaneously recognize legitimate methodological criticisms and the intellectual honesty of your critics. I wish there was more of that in academia and blogging.

      Don’t get me wrong, you may well get legitimately savaged here for writing “The Tetris paper I cite helped clarify exactly when the therapeutic effect existed: it turned out the temporal window was extremely small.” One of the major problems in all of empirical social science right now is the belief that you can get to this kind of certainty from looking at the results of small-N experiments with treatments that are likely to generate wildly heterogeneous effects (and often with noisy proxy measurements for the outcomes we are really interested in). To say that this experiment can “clarify exactly when the therapeutic effect existed” is essentially to perpetuate the exact problem we’ve been discussing here for the last year or so.

      But like I said, you came here to logically and honestly defend a point of view many of us (vocally) disagree with, and I think that is admirable. I also think you might benefit from spending a little more time stepping back from individual studies that seem like they build on previous results, and thinking about the epistemology at work behind this network of inter-related statistical-significance findings that constitutes the “empirical knowledge” you are relying on for understanding the world. One of the most terrifying logical results of the Garden of Forking Paths critique is that you can get an entire branch of social science research that looks like it is constantly confirming itself but is actually just chasing noise all over the map and then contorting the (always under-identified, in the Quinean sense) underlying theory to fit the new “results”. (The small simulation after the footnote makes this concrete.)

      * http://statmodeling.stat.columbia.edu/2016/09/21/what-has-happened-down-here-is-the-winds-have-changed/
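
      P.S. To make the “constantly confirming itself” worry concrete, here is a toy simulation (all numbers invented, and the p-values use a normal approximation) of the forking-paths mechanism: each of 1,000 hypothetical labs studies pure noise but gets to try 20 reasonable-looking analyses and report the best one. Drawing each analysis independently overstates the independence of real forking paths, but it keeps the sketch short.

          import math, random

          def z_to_p(z):
              # two-sided p-value for a z-score (normal approximation)
              return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

          random.seed(1)
          n, forks, labs = 20, 20, 1000   # sample size, analyses tried per study, number of studies
          found_something = 0
          for _ in range(labs):
              best_p = 1.0
              for _ in range(forks):   # each fork: a different subgroup/outcome/covariate choice
                  xs = [random.gauss(0, 1) for _ in range(n)]   # the true effect is exactly zero
                  mean = sum(xs) / n
                  se = math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1)) / math.sqrt(n)
                  best_p = min(best_p, z_to_p(mean / se))
              found_something += best_p < 0.05
          print(found_something / labs)   # ~0.7: most of these null studies "find" something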

      • > “you can get an entire branch of social science research that looks like it is constantly confirming itself but is actually just chasing noise all over the map and then contorting the (always under-identified, in the Quinean sense) underlying theory to fit the new ‘results’.”

        Why do you limit it to “social science research”? Any research that measures progress via NHST will eventually end up that way.

        It really is simple once you are willing to accept how bad the current situation is. For it to be productive science, it is key that you figure out *precise* predictions from a theory and compare those to observation/measurement. If you don’t see that happening, most likely you are looking at a bunch of people wasting each other’s time.
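
        A stylized illustration of the difference (every number here is invented): with a huge sample, NHST rejects “effect = 0” even when the true effect is negligible, so the rejection tells you almost nothing, whereas checking the theory’s precise quantitative prediction actually puts the theory at risk.

            import math

            def z_to_p(z):
                # two-sided p-value for a z-score, via the normal CDF
                return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

            # Treat the observed estimate as equal to the true effect, for simplicity.
            estimate, sd, n = 0.02, 1.0, 1_000_000   # negligible effect, enormous sample
            se = sd / math.sqrt(n)                   # 0.001

            print(z_to_p(estimate / se))             # ~0: "significant!", yet it confirms nothing precise
            predicted = 0.50                         # what the theory, taken seriously, predicts
            print(z_to_p((estimate - predicted) / se))  # ~0: the precise prediction is decisively refuted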

    • Jonah, you disagree with the claim about getting scammed over and over again, and I wonder if you could comment a little on how you’ve updated some of your beliefs about psychological phenomena. For example, you have spoken and written a lot about the importance of grit.
      https://www.wired.com/2011/03/what-is-success-true-grit/
      http://archive.boston.com/bostonglobe/ideas/articles/2009/08/02/the_truth_about_grit/
      http://www.businessinsider.com/jonah-lehrer-on-creativity-and-what-makes-people-successful-2012-7

      It now turns out that grit appears to be largely a recycling of conscientiousness, that it is largely unrelated to success and performance, and that Angela Duckworth made some pretty basic statistical errors in her original papers. You have, in the past, had the good fortune to wield a very powerful microphone when it comes to describing supposedly scientific findings to the general public. Do you feel that Duckworth misled you (and the public)? Do you have any responsibility to correct your earlier writings on this topic?

    • Jonah:

      Thanks for replying. My post wasn’t particularly polite and I appreciate that you moved beyond the tone to address substance.

      So, on the substance (or maybe I should say, the statistics):

      You write of “a longstanding contradiction in the self-control literature, in which performing two acts of self-control at the same time (such as having to pee and suppressing an emotion) led to the opposite of ego-depletion.” I have not read this literature, so I’m only commenting in general terms here, but in general terms I think a lot of apparent contradictions in research literatures are not contradictions at all; they’re just the result of a combination of random variation and problems with research methods. For example, Carney et al. reported power pose as having a large and statistically significant effect on hormones, and Ranehill et al. reported no evidence of an effect. This might seem to be a contradiction in the literature—but once you learn that the Carney et al. study was rife with uncontrolled researcher degrees of freedom, the apparent contradiction disappears. Speaking more generally, when you have a literature of strong claims based on noisy data, supported by statistical significance that was enabled by the garden of forking paths . . . when that happens, the literature will naturally look like it’s full of contradictions. It can be fun to come up with stories explaining these data patterns (hence much of the career of Satoshi Kanazawa), but this is far from empirical science.
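
      To see how easily this happens, here’s a toy simulation (invented numbers, not tied to any real literature): thousands of small, noisy studies of an effect that is truly zero, with only the “significant” results getting written up. The resulting literature appears to contain both an effect and its opposite.

          import math, random

          def z_to_p(z):
              # two-sided p-value for a z-score (normal approximation)
              return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

          random.seed(2)
          n, positive, negative = 15, 0, 0
          for _ in range(5000):                 # 5,000 small studies; the true effect is zero
              xs = [random.gauss(0, 1) for _ in range(n)]
              mean = sum(xs) / n
              se = math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1)) / math.sqrt(n)
              if z_to_p(mean / se) < 0.05:      # only "significant" results get written up
                  if mean > 0:
                      positive += 1
                  else:
                      negative += 1
          print(positive, negative)   # roughly equal counts of "X helps" and "X hurts": a manufactured contradiction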

      Also you write, “I don’t think these problems mean that every social science paper is a hopeless mess, or that recent papers in psychological science aren’t worth covering.”

      Just to clarify: Like you, I don’t think that every social science paper is a hopeless mess, and, like you, I also think there are recent papers in psychological science that are worth covering. But I do think that, for better or for worse, it’s not enough for a paper to (a) be published in a reputable journal, (b) have statistically significant results, and (c) have a good story. The recent “air rage” paper, for example, had all three of these features but I still don’t think it was worth covering (except as a cautionary tale of science reporting).

      So, yes, I think lots of cool stuff is being done in the social and behavioral sciences. And, ultimately, I think that the real stuff is more interesting than power pose, embodied cognition, ESP, and other well-publicized dead ends of recent years. I’m not slamming people for having studied such topics—we learn from research failures as well as from successes—but, to move on, we have to recognize the failures. And, as a statistician, I’ve put a lot of thought into understanding how the failures happened and how we can catch them sooner in the future.

  4. Glad to see that @jonahlehrer has responded.

    Here’s confirmation of the retrodiction/guess that JL might write *an inspirational “business book” with someone who does … these experiments*:

    That book, *The Smarter Screen*, by Shlomo Benartzi (author) and JL (contributor), was published one year ago (10/2015).

  5. Just thinking aloud: I like Andrew’s blog, and so I like to agree with him. And I like a story about another clueless science writer; it reinforces the sense of having a mission in this world, as well as the sense of community with a common target here on this blog.

    But then I realize I know nothing about Jonah Lehrer. Although I trust Andrew’s statistical judgement, I have seen that he can be quite abrasive, sometimes also rash in his judgements. So the uncertainty is large and all the possibilities remain open. Maybe Andrew’s take is complete crap, based on a superficial glance which seemed to confirm his prejudice? Maybe he’s being irresponsible?

    I find I’m still too often accepting thoughtlessly the judgements of people I like or whose writings I like. It’s a danger. We’re social beings and everything we do is social to some extent. We’re constantly being tossed around by social cues and social emotions. We want to belong, sometimes more than we want to know. We’re not good at catching this. We constantly need to remind ourselves of our limited knowledge.

    • Alex – I agree with all of your points – and it depresses me greatly. I have reached the point that I trust nobody – at least, not without their earning my trust. This sort of works on an individual level, but my main vocation is teaching, and I am at a loss for what I should teach students. I can teach them to trust no analysis without having done it themselves (and, of course, that comes with plenty of caveats) or without extensive reputational evidence (not simply where they got their degree, how many pubs they have, where they teach, etc.). I can justify that I am teaching them to be well-informed people. But the reality is that they cannot operate without trusting something and/or someone. The complexity of the world, the intentional and/or accidental missteps, and our growing awareness of the shortfalls of traditional measures of reputation are all conspiring to make it an impossible task.

      It’s no wonder that algorithms outperform humans – or soon will, in most aspects of life. That comes with its own set of problems (science fiction is probably not far off from what we should worry about).

      Making it more concrete: like you, I don’t know Jonah Lehrer or his work. And I don’t have the time to figure out whether I trust him. I trust Andrew’s work and reporting, but also know he has limited time and may be subject to some of the forces you mention. I also put some value on the fact that Jonah has responded to the post. I even looked at his post on the recent working paper he cited – and his post seemed quite reasonable to me. I’d have to do a fair bit of work to really evaluate what he says about that work, including judging the work itself. Naturally, I want to see the data for myself, but that will not be available. And, even if I had the inspiration to pursue this one item in detail, there are hordes of others that I cannot devote the time to. So, who am I to trust?

      I can answer that – but I suspect that my answer would be just as problematic. I can apply the same dichotomous decision making that Andrew rails against with NHST – person A is to be trusted, B is not. I think that is probably just as bad as NHST.

  6. File under: “sick Gelman burn.”

    Although seeing Lehrer in the comments makes me feel bad for the guy. I wonder if Gelman gets those feelings of “Aw man, maybe I came down too hard on someone who looks up to me.” Or maybe he knows you have to crucify people for the greater good.

  8. Update: I went back to Jonah’s blog and saw his report on a recent paper by Catalini, Fons-Rosen, and Gaule, “Did Cheaper Flights Change the Direction of Science?” He describes their study – much like many of the other studies Andrew mentions in his post – as sensational. I downloaded their paper and it is a “monster”: a huge amount of data and complex analysis. Here’s the punchline: “scientist-pairs increasing their pairwise collaborations by approximately 50%.” That effect is attributed to the entry of Southwest Airlines into a city-pair route. I find the size of the effect fairly incredible! Even more incredible is that the effect appears to take place within a year, although it does increase with time after Southwest enters a route. I didn’t even know you could get a paper published within a year.

    This paper is yet another example of the issues. It is complex – trying to replicate it, even if I had the data – which is not available – would take me weeks of work. It is not a simple paper at all. The effect seems incredible. It isn’t just a matter of statistical significance – the effects seem to be large – too large for me to believe. (I also didn’t realize that chemists were so sensitive to air travel costs – their grants must have good cost-reduction incentives built in).

    So, does Jonah’s highlighting this paper make matters better or worse? Is this just another case of journalistic malpractice? Do I trust the results of this paper? Do I trust Jonah? If Andrew (or some other of the worthy contributors to this blog) sought to analyze this paper in detail and found it wanting, do I trust that result? I’m at a loss.

    (I will say, despite a lot of complicated and subtle analysis in the paper, one thing they do not appear to have done is compare the collaborative publications with single-authored publications to see if the latter also increased – I would think such a check would have been relatively easy and revealing.)

    • Dale:

      Pleading hardship is no defense in science ;-)

      One should not _have to_ trust anyone in science but can only distrust at most a few at a time.

      Now, anyone trying to make sense of the published literature, as Jonah apparently is attempting, has a Herculean task – having to discern varying signals mixed in with noise and indirect but wishful pattern-seeking. It does seem hopeless, and any emergent interpretations/explanations are unlikely to be well connected to reality.

      I agree with Andrew that not all the studies are hopeless but I am not sure I would agree with him that those can be identified and made sense of in isolation.

      In an area I am more familiar with – clinical research – I do not think it is possible from just the published papers. Without some insider information (such as is available to regulatory agencies), it’s too hard to discern what was hidden (not published), not reported on (selective reporting), or misreported (e.g., blinding in the trial was not actually implemented as planned). Unfortunately, it takes just a subset of studies having these problems (which you can’t identify) for the well to be poisoned.

      I also don’t doubt there are well meaning people that don’t understand this and so try very hard to make sense of the published literature and feel justified in disseminating their interpretations.

  8. I’ll be a contrarian and say that I don’t think journalists should try to pick apart and critique research papers. They don’t have the training, and it’s not their job. It would be insulting, in fact. What next – Tom Clancy critiquing Supreme Court decisions? If the top journal in psychology says that XYZ is a new and interesting finding, why shouldn’t journalists report it? The fault lies with the authors, the journal editors, and the profession. I don’t think it’s fair to say a journalist is getting scammed over and over. Statistical mistakes that are not always obvious even to researchers cannot be expected to be obvious to journalists.

    *However*, I agree fully with Prof. Gelman that science journalists should be skeptical, and report not just the finding, but dissident views too, even just calling someone else in the same field who is not a co-author to get their take on the paper.

    • But if you are going to breathlessly hype a particular idea to the public, and that idea then pretty much gets debunked, you surely have some journalistic responsibility to also walk back the hype.

        • OK, you’re talking about corrections/retractions. Yes, absolutely. And this is where perhaps journalists are not sure what to do, because a retraction is straightforward: claim X turned out to be false, so we are correcting the record. With statistics, the claim is not exactly false, but rather *probably* false, or at least much less convincing than the hype would have you believe. I’m not sure journalists are trained to deal with this sort of revision/update. Is there a journalist in the room?

        I have a rule of thumb (not mine exactly): the bigger the claim, the less likely it is to be true. It works well. And maybe journalists should follow a similar rule of thumb and apply more scrutiny to bigger claims.

    • “I don’t think journalists should try to pick apart and critique research papers. They don’t have the training.”

      Unfortunately, a lot of researchers don’t have the training either. Maybe what we need is journalists with the training. Hmm, maybe there ought to be joint statistics/journalism programs in universities to produce such people?

      “I agree fully with Prof. Gelman that science journalists should be skeptical, and report not just the finding, but dissident views too, even just calling someone else in the same field who is not a co-author to get their take on the paper.”

      +1

      • Isn’t there some common-sense skepticism that can jumpstart this process? I don’t mean that common-sense skepticism is sufficient, just that it can help get things going.

        For instance, there’s every reason to distrust the idea that “grit” separates the successful from the unsuccessful. Sure, persistence is good–but we persist at things not because we’re “gritty” but (in many cases) because the subjects hold interest for us and we’re rather good at them. Building “grit” in schools with hopes of increasing achievement is simply silly. Better to help students build a foundation in the subjects so that they can go deeper into them on their own.

        Now, that’s different from saying there are problems with studies of “grit.” Still, it can provoke a closer look at such studies.

        • Relatedly – ever notice how graduate students seem to unlearn how to think on their own in the first year? It’s like, you fill someone up with sufficiently complicated “theory” and all of a sudden they stop trusting their own critical-thinking abilities. Stuff they would’ve nailed before grad school.

          I felt this way after my first year. Took at least a year to get my full ability to trust my own thinking back. But I was lucky, in that I’m generally incapable of deference, and that helped me along.

          It would be nice if we could end up teaching these smart, ambitious, eager young people to continue trusting their own judgement even while learning the value of the theoretical and statistical toolkits relevant to their fields. One thing that would probably go some way towards regaining some “common-sense skepticism” among future researchers is simply reminding our graduate students of the value of their common sense and forcing them to use it critically.

  9. Great to see Jonah up and running! I quickly resented his success years ago when he wrote about psychology as I could have done (but chose not to), but that was just a competitive thing. I hugely resented the fox-hunting that started when he simply recycled what he’d been blogging on, perhaps to a different audience. Wow! People writing the same thing more than once? What scientist doesn’t repeat themselves in their papers?!

    And if simply passing on what others have written, perhaps without any critical consideration, is a disgrace, then Carl Zimmer and Ed Yong, not to mention the BBC science department, should expect a missive from Andrew every week!

    We need people like Andrew to apply their insight, their mover-and-shaker skills, and even their viciousness to targets that have lived too long, but let’s hope fox-hunting Jonah isn’t going to distract us from addressing pseudo-science in the form of peer review, groupism, ignorance of the nature of science, and that other thing… oh yes – bad stats. (Any chance of the burning glass being turned to Pinker any time soon?!)

    But remember, all these things could be fixed if bad scientists started rethinking their work as soon as faults were pointed out, instead of holding out until they can’t resist any longer. It is not up to critics to have to work hard to persuade others of their errors; it is up to scientists to work hard to understand their critics.

  10. I don’t find this post constructive, for either the field of statistics or social science.

    I would encourage you to consider the strategy of Data Colada, with blog authors reaching out to relevant parties for comment before posting. Their strategy results in posts that are kinder, more nuanced and more productive.

    • Alex:

      I too think Uri Simonsohn and his colleagues are great, and I’m glad they write their blog! I don’t think they and I are in competition for who is more productive; rather, I think we’re all adding value in different ways at different times, and it’s great that there’s room in the blogosphere for all of us.

  11. Alex: As both Martha (Smith) and I pointed out on the “Some people are so easy to contact and some people aren’t” thread, the usual response to “reaching out to relevant parties” is no response at all. I wouldn’t blame Andrew at all for not attempting to do so.

    • My approach is usually to write the post first and then contact the subject of my criticisms afterwards. But the “some people are easy to contact” point certainly holds, and if the person I’m criticizing is very famous I often don’t bother out of a kind of humility.

  12. > But how could he be so clueless? Perhaps he’s auditioning for the role of press agent: he’d like to write an inspirational “business book” with someone who does one of these experiments, so he’s getting into practice and promoting himself by writing these little posts.

    From Matt Taibbi, “Journalist Amy Goodman Shouldn’t Be Arrested for Covering Dakota Pipeline Story” –

    “The journalism business is designed to make telling the truth difficult. There are a lot of obstacles. In return for access to high-ranking politicians, the government typically charges a little bit of your honesty. In return for the large sums of money advertisers pay to major network news operations, you have to give up a little bit more. Then there’s audience. In order to secure a big one, you sometimes need to give up still a little bit more of your soul. The easiest route to a big audience is a commercial network operation that piggybacks on the popularity of its other programming, like sports and sitcoms. Go that route and you are beholden not only to your own advertisers, but those supporting those other programs. The other route to a big audience is designing a program that tells people what they want to hear, which usually ends in not challenging your viewers/readers at crucial moments. So if your primary interest is in doing this job correctly, you usually have to give up the access, the money and the audience. You can go without the first two and still do a good enough job. But to have an impact, you have to reach people. So you have to find another way….”

    (David Brooks is no doubt puzzled by all this.)

      • From the article:

        “It was a Faustian bargain—and it certainly made editors at National Public Radio squirm.

        The deal was this: NPR, along with a select group of media outlets, would get a briefing about an upcoming announcement by the U.S. Food and Drug Administration a day before anyone else. But in exchange for the scoop, NPR would have to abandon its reportorial independence. The FDA would dictate whom NPR’s reporter could and couldn’t interview.

        “My editors are uncomfortable with the condition that we cannot seek reaction,” NPR reporter Rob Stein wrote back to the government officials offering the deal. Stein asked for a little bit of leeway to do some independent reporting but was turned down flat. Take the deal or leave it.

        NPR took the deal. “I’ll be at the briefing,” Stein wrote.”

        The story doesn’t get any more uplifting after that. And we wonder why so many people are cynical, distrust the news media, and feel alienated from their government…

        (That said, thanks for the link.)
