Kristof/Brooks update: NYT columnists correct their mistakes!

Who will issue a correction first?

Nicholas Kristof, who uncritically cited the hurricane/himmicane paper, which appeared in the prestigious Proceedings of the National Academy of Sciences but was then debunked in a stunning round of post-publication review?

David Brooks, who botched some historical economic statistics and, in an unrelated incident, uncritically cited some education statistics from magazine publisher Ron Unz that were later debunked in a stunning round of post-publication review?

The clock is ticking . . .

Before going on, let me emphasize that there’s no shame in making a mistake. I am not in any way mocking Kristof and Brooks for promoting bad research. These things happen. The only way to be sure you’re not promoting bad research is to promote no research, and that’s the road to being Mike Barnicle.

I applaud both Kristof and Brooks (along with the other NYT columnists, most notably Paul Krugman) for engaging with social science research, for reading widely and being interested in the connections between research and policy.

And then when it turns out they made a mistake, they can correct it. No shame in that, no shame at all. It’s just opening things up a bit, and it’s not such a bad thing for readers of the New York Times to realize that columnists can get things wrong. There was only one Art Buchwald, after all, and he’s no longer around.

The corrections

To make things easier, I’ve drafted admissions of error for both columnists below. I have not attempted to imitate their style but rather to express, as sincerely as possible, the content that I think would be appropriate in each case.

Kristof: One thing we’ve learned in recent politics is that perception is as powerful as reality. In areas ranging from economics to legislation to counterterrorism, outcomes are often decided as much by expectations as by the hard numbers. So when I read recently of a study finding that the naming of hurricanes affects people’s reactions to them, I wasn’t surprised, and I wanted to share this information with you, my readers. It turns out, though, that this study was sloppy and should not have been published. I’ve read enough critiques and talked with enough experts to realize this. I regret using a column to promote research which happened to be wrong, but I don’t apologize for doing it. The research claim made sense to me and it was published in the prestigious Proceedings of the National Academy of Sciences. Hey—it turns out those guys make mistakes! Luckily I have this column where I can quickly correct my mistakes. Let’s hope the National Academy of Sciences figures out how to do this too.

Brooks: Over the past two years I’ve talked a lot about humility and the need to balance humanistic and scientific thinking, a balance that is central to politics. As political leaders from Margaret Thatcher to Paul Ryan have taught us, warm feelings are no substitute for hard facts. In today’s column I’d like to exercise a bit of humility myself and tell you about a couple of factual mistakes I’ve made over the past few years. . . .

In the immortal words of Chris Hedges: That wasn’t so bad, was it?

P.S. When looking up Kristof’s column, I found this:

Correction: June 16, 2014
Nicholas Kristof’s column on Thursday misspelled the middle name of a Vanderbilt professor. She is Cecilia Hyunjung Mo, not Hyunjong.

Similarly, there was this memorable correction from a David Brooks column a couple years ago:

An earlier version of this column misstated the location of a statue in Washington that depicts a rambunctious horse being reined in by a muscular man. The sculpture, Michael Lantz’s ‘Man Controlling Trade’ (1942), is outside the Federal Trade Commission, not the Department of Labor.

I think if you’re willing to correct the spelling of one vowel in somebody’s middle name or the location of a statue of a rambunctious horse, you should be willing to correct the erroneous statement, “Researchers find that female-named hurricanes kill about twice as many people as similar male-named hurricanes because some people underestimate them,” or various erroneous economic and education statistics. Again, no shame in issuing a correction. We all make mistakes, dude.

56 thoughts on “Kristof/Brooks update: NYT columnists correct their mistakes!”

  1. Have the editors of PNAS issued a correction? Has the Journal admitted, in Andrew’s words, that “this study was sloppy and should not have been published”?

    Has either the University of Illinois at Urbana-Champaign or Arizona State University issued a press release owning up to the sloppy work done under their roofs?

    • Rahul:

      Not to my knowledge. Interestingly enough, Kristof quotes the editor of the PNAS paper:

      “It’s a mistake to assume that gender bias is only or mainly about misogynists,” said Susan Fiske, a psychology professor at Princeton University and the editor of the hurricane study. . . .

      Of course, Fiske could be perfectly correct about the general point, even if the paper in question is uninformative. That’s the story with lots of these papers. The theory on the fertility-and-clothing-choice paper could be generally true as well, even though the published research on the topic provides approximately zero evidence in support of the claims. This is one thing that people seem to have difficulty with, separating the general plausibility of the theory from the strength of the evidence in the published paper. And, as I keep saying, I have no problem with journals publishing theories and speculation. My problem comes in when they systematically overstate the empirical evidence.

      As to Brooks, he quoted mistaken numbers by magazine publisher Ron Unz. Unz never quite admitted his mistakes, but at one point he did remark that he came up with his numbers after “five minutes of cursory surname analysis.” I think this is as close to an apology and admission of error that we’ll ever get from him—and it’s not that close. I still have some hopes for Brooks, though.

      • I think we are being too soft on the Journal, the editors, referees, the departments concerned, the universities where the work was carried out & their PR guys. Perhaps even the relevant academic bodies (e.g., Psych / Stat associations).

        The errors were methodological. Imagine an outsider’s situation now. Say Kristof. He’s no expert on the methods. Who does he trust? He’s seeing a controversy, a back and forth between the Hurricane guys and another academic.

        But the Journal involved, the respective departments, the concerned universities, editors, or subject associations are strangely quiet. If you want to fault someone (and ask for a correction) I think you should start here.

        In the absence of this I think it’s excusable for a journalist to stand by his story till the dust settles.

        • Rahul:

          I definitely think the case for Brooks to correct is much clearer than for Kristof. The Brooks examples are clear factual mistakes, whereas Kristof would have to ask some experts. I do think that at this point just about any outside expert would agree with Jeremy Freese on the himmicanes, but it could take a while for Kristof to find someone to ask, and of course it’s not his highest priority; it was just a few paragraphs in one of his columns.

          I agree that it would be useful for the journal editor to retract her quote, and it’s an interesting question as to why I wasn’t more bothered by that. For the journal itself, I think a better approach than retraction would be to have more open and available post-publication peer review.

        • Imagine how easy it would be for their respective departments to find experts to judge the work. They are literally swimming in experts.

          But I bet we’ll never see a formal note from either university or department on this issue. How hard would it be for the departments to post a note on their website saying, “This recent study by our faculty seems seriously faulty. We regret the mistakes and caution anyone against relying on the results”?

          IMO, this is the problem you should be fighting.

        • Agreed. While I obviously can’t say for sure, it seems like Andrew’s outrage is more politically motivated than it is motivated by “correcting the record”. I’m curious to hear Andrew’s reflections as to why and where his efforts have been focused. Even the authors of the original paper have yet to correct their mistakes: https://scatter.wordpress.com/2014/06/17/the-hurricane-name-study-tries-again/

          Of all the people to be upset at and call out, it seems like the paper authors, the journal, the university, the departments, etc. should be the first to admit their mistake. Until that happens, I have no expectation that a non-specialist can sort through to the reality.

        • John:

          I don’t understand what you mean by “politically motivated” but in any case I’m not sure why whatever motivations you impute to me should be a reason for false statements to be published in authoritative places.

          Other than that, I think we’re mostly in agreement. In my blogs on the himmicanes/hurricanes study I’ve repeatedly linked to Freese’s writings and discussed the frustrating refusal of the authors of that paper to consider that they might be wrong. I had a few posts on this before mentioning Kristof at all. Similarly in my discussion of the Ron Unz stuff, I only mentioned Brooks in passing in much of my discussion. But at some point it seemed to me that he had some responsibility as the author of a widely-read column.

          Brooks and Kristof have big audiences. They (unwittingly) spread confusion; now I think it would be good for them to clean up the mess. As noted in my comment above, I definitely think the case for Brooks to correct is much clearer than for Kristof, but I do think that at some point Kristof should be able to talk with enough experts to clarify this.

          Unfortunately at this point I don’t see the authors of the papers in question backing down. But Brooks and Kristof have less invested in this; it shouldn’t be so hard for them to admit their mistakes.

        • Now that I’ve been called on it, my comment about motivations was not appropriate. So I’ll retract that statement: mistake admitted.

          I suppose I was just fishing (without a license and haphazardly) for an explanation for the focus on Brooks (and also Unz); surely, there are other journalists with the same issues. But note, I never implied that it was okay for Brooks to have false statements published in authoritative places. My point was just that non-experts have good reason, in my opinion, to conclude that the truth or falsity of those statements is in dispute amongst experts. Perhaps Brooks should state that much.

        • I agree with Rahul – can we really expect journalists to correct their articles when the scientific journals don’t? On what basis? You say the journalist should ask an expert, but isn’t that what PNAS was supposed to do before publication?

          And if just about any outside expert would agree that the analysis is unacceptable, then who reviewed this paper, and why don’t the authors, their institutions or PNAS issue a correction, now that they have been made aware that everyone agrees about the mistake? I think it’s really science that needs to clean up first, not the mass media.

          As for the “debunked,” well … I have played a bit with the data myself (see http://wp.me/pEJOT-1iA ) and yes, there is probably too much structural uncertainty to have any confidence in a strong effect of the name. This statement shouldn’t have passed peer review in any journal, especially not in a high-impact-factor journal. Yet, even if that is no excuse, I have seen worse examples of cherry-picking. And I found many “debunking” attempts on the web equally shallow in their statistical reasoning.

          In the end, we have a study that seems to show implicit gender bias in the perception of hurricane risks in experiments, and some historical data that are inconclusive regarding the real-world effect of the name. If presented in that way, it would probably have been an OK paper for a specialized journal. Sadly, by exaggerating the confidence in large real-world effects, the authors were rewarded with a publication in PNAS.
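
          For anyone who wants to poke at the fragility themselves, here is a minimal sketch of the kind of sensitivity check I have in mind (hypothetical file and column names, not the authors’ exact setup: a CSV of archival storms with deaths, the femininity rating of each name, minimum pressure, and normalized damage):

            import pandas as pd
            import statsmodels.api as sm

            df = pd.read_csv("hurricanes.csv")  # assumed file of archival storm data

            def femininity_coef(d):
                # Negative binomial regression of deaths on name femininity,
                # controlling for storm strength and damage.
                X = sm.add_constant(d[["femininity", "min_pressure", "norm_damage"]])
                fit = sm.GLM(d["deaths"], X,
                             family=sm.families.NegativeBinomial()).fit()
                return fit.params["femininity"]

            print("all storms:   ", femininity_coef(df))
            # Drop the two deadliest storms and watch the estimate move:
            print("minus top two:", femininity_coef(df.sort_values("deaths").iloc[:-2]))

          If the headline coefficient swings around when a couple of extreme storms are excluded, that is the structural uncertainty I am talking about.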

          Also strangely quiet is Joseph Hilbe, a first-rate statistician and a coauthor of the paper. Shouldn’t he be involved in either defending or repudiating the analysis? Was he even involved with the analysis? What was his role as coauthor? To me this is the most puzzling aspect of this whole affair. I agree with Rahul — from the point of view of a journalist, this might look like an arcane dispute between experts on either side, rather than a mistaken study slipping through the peer-review cracks.

  2. While I’m of a similar political persuasion to Krugman, I would not agree with the statement that he reads widely in social science research. He primarily looks at research on income/wealth inequality and racial factors in voting behavior, all of which fit his world view (in economics, since he is a professional economist, he of course follows that literature). Now, pretty much every columnist does that: George Will has cited work by Ansolabehere, de Figueiredo and Snyder claiming that money donated to campaigns is simply a function of personal preference, having nothing to do with an expectation of policy or regulatory returns (http://townhall.com/columnists/georgewill/2002/12/30/theres_nothing_evil_about_campaign_spending#breaking). Anyone who has spent any time at all on the inside of the political world knows how false this is (easily as false as sex determination of hurricanes), with much more pernicious consequences.

    Yet since these authors use “sophisticated” statistical techniques, this type of research gets a pass (including from this blog). I would go a step further and claim that the rate of errors/misanalyses/overall bogusness is just as high with complex statistical techniques, and that while you are good at pointing out problems with straightforward simple analyses, you miss this other type (not sure what you can do about it; it takes real work to handle a complex analysis).

  4. While you’re at it, work on Bernstein too, and that exactly-backwards article about head injuries. It should _really_ be easy for him to issue a correction, since all he did was repeat an error in a press release about a study. But although it’s been pointed out to him, he hasn’t corrected it.

  5. As social scientists we should examine incentives.

    Presumably making mistakes and not correcting them ought to diminish reputation, and perhaps readership.

    At the same time admitting error may also diminish reputation (perhaps branded as sloppy).

    Would be interesting to analyze incentives of employer, reporter, reader, and strategic interplay with competitors.

    In practice ethics is not God-given (if anything it is given to a God) but is a social norm. A strategic equilibrium. Apologies may well be off the equilibrium path.

    • Fernando:

      I agree with the spirit of your comment but I think that describing the situation as a “strategic equilibrium” is a bit too strong or too cynical. My impression is that most of the people involved are doing their best. I don’t think Kristof or Brooks want to publish errors, nor do I think the researchers in question want to do things wrong. But there is this problem, as I noted in an earlier comment, that people seem to have difficulty separating the general plausibility of a theory from the strength of the evidence in a published paper. And, as I keep saying, I have no problem with journals publishing theories and speculation. My problem comes in when they systematically overstate the empirical evidence.

      I agree that incentives are a big part of the problem here (overstating the evidence gives you a potential big win, in the form of publication in a prominent journal, and the downside risk is low), and I suspect that if the incentives could be changed, this could lead to a better understanding of statistics and science (as people will have more of a motivation to gain such understanding), but I think it’s oversimplifying to characterize the behavior here as being simply strategic.

      • Andrew:

        I agree that they probably don’t want to publish errors. Moreover I was not writing about incentives to exaggerate etc.

        My comment is only about incentives to apologize once a mistake is uncovered. Maybe they want to apologize but can’t, maybe some of us in their position would behave the same way, etc. We need to understand incentives to apologize or correct mistakes.

      • PS. Suppose I am the only reporter who apologizes after each mistake. My readers will learn that I apologize a lot but may also come to believe I make more mistakes than average, if no one else apologizes. They might begin to distrust me and perhaps read me less.

        If so, there are strategic implications in that the payoff to an apology depends on whether others also apologize regularly.

        This is a simple story, but given the ubiquity of denials and dearth of apologies in journalism, politics, etc., I think there is something strategic to it.

      • PPS You might start modeling this as a prisoner’s dilemma with “no apology” in the place of “fink”. So the answer to the question at the start of your post is: “neither of them will apologize”.
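
        To make that concrete, here is a toy version with invented payoffs (any numbers ordered like a prisoner’s dilemma give the same answer):

          # Two columnists each choose "apologize" or "no apology" after an error.
          # Payoff numbers are illustrative; only their ordering matters.
          payoffs = {  # (my move, their move) -> my payoff
              ("apologize", "apologize"): 3,   # shared norm of correction
              ("apologize", "no apology"): 0,  # lone apologizer looks error-prone
              ("no apology", "apologize"): 4,  # rival's apology flatters me
              ("no apology", "no apology"): 1, # mutual stonewalling
          }

          for theirs in ("apologize", "no apology"):
              best = max(("apologize", "no apology"),
                         key=lambda mine: payoffs[(mine, theirs)])
              print(f"best response to {theirs!r}: {best}")
          # Prints "no apology" both times: not apologizing is dominant,
          # so mutual non-apology is the unique equilibrium.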

      • One thing that you didn’t mention in your earlier post is that it’s not just that pundits have very strong incentives to maintain an illusion of infallibility; it’s also that people generally process information about a source’s reliability asymmetrically, which means that maintaining credibility is a fragile business.

        Specifically, people will more strongly recall, and weight, that someone has been proven wrong than that they’ve been proven right. Furthermore, a single admission of error will disproportionately damage someone’s credibility, and particularly so, because a) this negative information is seen as especially important, b) the admission of error removes all ambiguity about whether an error was made, and c) because such admissions are rare, they are especially memorable.

        It’s difficult to overstate how much various intellectual subcultures combat this natural tendency by turning it on its head, Socratic-like, and seeing a willingness to admit error as a necessary prerequisite to the discovery of truth. But those of us who have strong affiliations with such cultures — say, in academia or science — will correspondingly be somewhat naive about how strong the opposing natural tendency is elsewhere.

        And while the natural tendency is strong generally, it’s especially and arguably maximally strong at the nexus of opinion journalism and politics. As you discuss in your prior post, this is a subculture where there is everything to lose by admitting error.

        Your passing mention of Krugman is actually very appropriate because most of his professional instincts are yours: they are academic and scientific, and he clearly sees admission of error as an essential component of professional integrity and credibility. Nevertheless, in his punditry and popularizing work, there’s a clear history of this implicit culture clash. In both directions. For his part, he’s had to learn to be more careful about when and how he admits error and, to various degrees, to somewhat privilege the clarity and power of his message over his dedication to an academic notion of intellectual integrity. But, with regard to others, both his economics-popularizing peers and his punditry peers, he continues to expect an academic level of intellectual integrity that includes the requisite admission of error. He is frustrated and angry that his expectations aren’t met.

        I’ll grant him that, with regard to the people he criticizes, he’s far further along the intellectual-integrity direction than they are. But it’s a bit surprising to me that he hasn’t realized, from his own experience of how his admissions of error are perennially carted out by his critics as proof of his lack of credibility, why exactly it is that the people he criticizes aren’t similarly willing to admit error.

        For Brooks or Kristof to admit to a non-trivial error, as you suggest they do, would be for their admissions to immediately become high-profile fodder for their critics. And not just then, but forever more — a link to the admission, a quote of it, will be repeated at any occasion when a club to use against their credibility is wanted. But more to the point, such a club will work. It will work because people strongly recall that someone was proven wrong about something and especially they recall when someone admitted to it and, finally, they weight that information very heavily when evaluating credibility.

  6. I think a lot of systems work on an implicit / explicit cascading of trust & authority: e.g., the surgeon takes the word of the pharmacist that his drugs were properly compounded. The patient doesn’t have to vet his surgeon’s protocol every time the guy operates. Et cetera.

    One point that annoys me is the lax & forgiving attitude of some towards the peer-review step in such a cascade of trust. When a student or journalist or policy-maker reads a peer-reviewed article (in a reputable journal), the expectation is that a fair degree of vetting and due diligence has been done by the journal’s editorial process: no obvious methodological flaws, no trivial errors, a theoretical basis that seems reasonable to experts from the area, etc.

    No one is demanding absolute correctness, but mistakes such as the ones discovered in the Hurricanes paper or the Fertility paper seem far less demanding to catch. Now if once in a while a mistake happens, the right attitude would be to own up to the mistake. Not just by the authors but by the editors, referees, university departments, funding agencies, etc.

    Instead I see excuses like (a) “Oh! But we the referees don’t get paid,” (b) “They only had 10 days to review it,” (c) “They get no credit for it so we cannot expect them to be accountable,” (d) “Can’t shame them since they are entitled to absolute anonymity,” etc.

    I think that’s the wrong response. If the system has flaws, let’s fix them. If greater due diligence means fewer papers get published so be it. But I don’t see any constructive use of a system where we consciously & intentionally dilute the responsibility of peer-review to the point where no one trusts peer review at all.

    You cannot expect a NYT journalist or a DC policymaker to understand & judge the minutiae of methodological validity. That’s a call for experts. That’s why we have peer review.

    PS. And reactionary proposals that call for getting rid of peer review entirely are throwing out the baby with the bath water IMO. That’s a fix worse than the problem itself.

    • Rahul:

      I think that peer reviewers accept things such as the himmicane and fertility papers because the reviewers are sympathetic with the general lines of research (you can see that, for example, in Kristof’s quote from Susan Fiske). Once you think something is generally on your side, it’s natural to accept it, based on the general belief that even if the article has some flaws, that’s ok because all work has some flaws. It’s natural, then, to consider statistical criticisms as mere technicalities and to even get angry at critics for being picky. Fundamentally I think there are some important statistical misunderstandings, most important, perhaps, being the idea that statistical significance can be taken as some sort of proof. What bugs me is not so much that this work gets published as that it is taken as offering strong support for the theories in question.
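
      To illustrate the statistical point with made-up numbers: suppose a small true effect, measured noisily, and suppose we only trumpet the estimates that reach significance. A few lines of simulation show how that filter guarantees exaggeration:

        import numpy as np

        # Made-up numbers: true effect 0.1, standard error 0.5.
        rng = np.random.default_rng(1)
        true_effect, se = 0.1, 0.5
        est = rng.normal(true_effect, se, size=100_000)  # simulated study estimates
        sig = est[np.abs(est) > 1.96 * se]               # those reaching p < .05

        print("share significant:", sig.size / est.size)    # about 5%
        print("mean |sig. estimate|:", np.abs(sig).mean())  # about 1.2, vs. a true 0.1

      Any individual “significant” result from a design this noisy is all but guaranteed to overstate the effect, which is why significance can’t be read as proof.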

      • As a reviewer, you also have to take the authors on trust for many aspects of the work. For example, you have to assume that they are not hiding evidence (other models, etc.) that would undermine their position.

        I review about 20 papers a year, and 25-30 when you count the R&R stage. There’s no way that I can spend more than a few hours on each one.

        • Well, maybe you should think about reviewing fewer papers then? Actually, it’s only the time you spend on the ones you *accept* that matters.

          Stealing from the networking dictum: “Be liberal in what you reject but extremely conservative in what you’ll accept.”

          That’s my opinion.

        • Rahul:

          It depends. I think that in a 5 minute review of a paper I can often contribute much more than 1/20 of what I would contribute in a 1 hour review.

        • Andrew:

          But that’s the law of diminishing marginal returns. Your decreasing contribution is still essential, up to the point where we have vetted a paper to our reasonable satisfaction.

          PS. Perhaps, instead of three reviewers doing a detailed multi-week review (at least in theory) we need 50 reviewers each spending an hour? I don’t know.
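
          As a back-of-envelope calculation (with invented catch rates): if a quick reviewer independently spots a given flaw with probability p, the chance that at least one of n reviewers spots it is 1 - (1 - p)^n.

            def p_flaw_caught(p, n):
                # Probability that at least one of n independent reviewers
                # catches a given flaw, each with probability p.
                return 1 - (1 - p) ** n

            print(p_flaw_caught(0.5, 3))   # 3 deep reviews at 50% each: 0.875
            print(p_flaw_caught(0.1, 50))  # 50 quick looks at 10% each: ~0.995

          The catch is the independence assumption: fifty reviewers sharing the same blind spot are no better than one.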

        • So that’s the secret to your productivity. Your hours are 100 minutes long. That’s an unfair advantage.

        • (as the earlier Anon)
          I would like to review fewer papers, but I also realize that I have an obligation to the profession to do my share. Journals already find it difficult to recruit qualified reviewers.

          And regardless of the amount of time devoted, reviewers still need to take authors on trust that they are honestly reporting their analyses and not omitting details of consequence that conflict with their conclusions. What alternative does a reviewer have, other than to request the data and run their own analyses — a worthy task for replication studies, but beyond the scope of what we should ask of reviewers?

    • @rahul

      1. Please define “reputable journal”. As we have discussed before, these are likely to publish both the best and worst papers (high variance), though better papers on average (an assumption).

      2. Those of us advocating an end to peer review are not anarchists. I personally want a separation of concerns. Specifically: (a) certifying the quality of the research; (b) curating “interesting” findings. Part (a) could be almost automatic, largely based on checklists, and one could obtain certification for any paper, even if it is on ArXiv or SSRN.

      PS It is not that hard to design a mechanism where certification bodies maintain their independence even if authors pay for certification.

      The present system mixes (a) and (b) with some ugly tradeoffs and bad results.

      • @Fernando:

        Re. (1): I don’t buy your assumption. I think of the “good” journals in my field as more likely to publish the “best” papers & less likely to publish the worst papers. If you have evidence against this, I’d love to see it.

        “Reputable journal” is, of course, subjective. In Chemistry / Chem. Eng. I think of the AIChE Journal or PRL or JACS as reputable, for example. They derive their “reputability” because my priors say (rightly or wrongly) that papers published in them will be vetted more closely, by better referees, and will thus be of higher quality, on average, than, say, the Iranian Journal of Chemical Engineering.

        (2) Your goals are noble. But till you have such a working system, peer review is what I am stuck with. And I think peer-review (with all its faults) is better than no-review.

        • @Rahul

          I think your position is more understandable if, as I understand it, your field is chemistry. I don’t get the impression that chemistry is as tabloid-prone as health, psychology or social science, in which case objectives (a) and (b) in my comment are nicely aligned, so there is not as much tension.

          In health, on the other hand, you face relentless pressures to show how eating two blueberries a day reduces cancer risk by 10%. And you need to do this at least 12 times a year to keep up with the publications expected of you. And all of these publications need to have policy implications for the Research Translation office to “translate,” and for you to get the Summum Bonum of an academic career: a citation in the NYT.

        • @Fernando:

          Fair enough. But since a fair chunk (majority?) of research in Nature / Sci. etc. tends to be from Chemistry, Physics and other “harder” sciences, I think @Andrew’s persistent name-calling (“tabloid journals”) is uncalled for.

          If the problem, as you describe it, is more specific to psychology or social science, then the introspection needs to happen in those particular fields rather than generically calling these journals names. In the subjects I tend to read, Nat. / Sci. are definitely as good as the average good journal, if not better.

        • Rahul, I think calling the social sciences “soft” sciences is part of the problem. When we consider them “soft,” we can be less rigorous, and thus we can publish exciting stuff without as much care. That might also explain to some degree why Nature / Sci. etc. tend to be more tabloidy for the social sciences.

        • Yes, but it’s not like the people refereeing the social sci. papers, and the editors approving them are from outside the disciplines. Ergo if imputed “softness” is the problem it is more an internal problem than an external one (assuming what you and @fernando write is true and it is indeed a soc. sci. specific “tabloidiness”).

          It isn’t as if some condescending physicist is reviewing the soc. sci. papers and letting them through at a lower standard because he thinks they are “soft,” so it’s OK.

  7. “there’s no shame in making a mistake”

    I think this depends on the reference frame. If I understand you correctly, we need a culture of admitting mistakes. In this case, admitting (or making) a mistake is part of the normal business of work (it’s maybe a mild embarrassment if silly mistakes are made, like spreadsheet errors).

    But it’s different if you live in a culture where one does not admit a mistake until it has been shot down so often and by so many people that one has no choice. In that case, admitting the mistake is a major embarrassment, because in addition to admitting the original mistake you also have to admit that all your attempts to fend off criticism were wrong-headed and thus that, in the end, you did not understand the criticisms for most of the time (or, alternatively, that you pretended not to understand them). I.e., admitting a simple mistake has been transformed into admitting that your entire approach to your work was wrong.

    I think we live much more in the latter culture than in the one I think you are suggesting (also when it comes to academics). The first approach would be better, not so much because mistakes actually get admitted, but because it would generally favor a more critical approach to research and the reporting of research: if the idea that mistakes are made were more present, I guess we’d see more cautious reporting and fewer cocksure announcements in the media – especially w/r/t op-ed writers, who often do not have formal training in the fields they are interested in.

  8. > I applaud … Brooks … for engaging with social science research…

    He’s not engaging with social science research. He picks what suits him to support his NYT gig, book sales, etc. Recall how Issenberg blew the whistle on him after Bobos in Paradise – http://www.phillymag.com/articles/booboos-in-paradise/ Nothing has changed. Mark my words. Nothing.

    (And, just for fun – http://www.popehat.com/2014/01/04/fear-and-loathing-in-falls-church/. I think I found that link here actually. I can’t recall anymore but it’s worth repeating.)

  9. Andrew, great post and discussion.
    The comments were even better.

    I’m writing a piece on this controversy. Rahul, could you give me a full name so I could cite your opinion that:

    “we are being too soft on the Journal, the editors, referees, the departments concerned, the universities where the work was carried out & their PR guys. Perhaps even the relevant academic bodies (e.g., Psych / Stat associations).

    The errors were methodological. Imagine an outsider’s situation now. Say Kristof. He’s no expert on the methods. Who does he trust? He’s seeing a controversy, a back and forth between the Hurricane guys and another academic.

    But the Journal involved, the respective departments, the concerned universities, editors, or subject associations are strangely quiet. If you want to fault someone (and ask for a correction) I think you should start here.

    In the absence of this I think it’s excusable for a journalist to stand by his story till the dust settles.”

  12. Kristof’s defense, I suppose, is that researchers DID say that “female-named hurricanes kill about twice as many people as similar male-named hurricanes because some people underestimate them.”

    I tried to pursue a similar issue with the Times shortly after the Nepal earthquake of 2015. They ran a news story, “Trying to Stay Ahead of Earthquake Disasters,” about an earthquake “expert” who wanted countries such as Nepal to buy extra-strong $70 desks, which children could crawl under if an earthquake hit. With two students per desk, he said, “the cost per life saved there is like $35.”

    I’ll assume he was quoted correctly, so this is factually correct.

    But what he says is, in any meaningful sense of the word, highly wrong and misleading. The desk will save lives only if a big quake hits that school during the lifetime of the desk, at a time when children are inside the school, and even then only if the school collapses, and children get under the desk in time, and the old desk wouldn’t have saved them from the debris but the new one does.

    My letter to the editor, and another to the corrections department, went nowhere. The quote, after all, was spelled correctly.

    I wrote to the Public Editor (a position since discontinued), and opened with the question, “When is it appropriate to mislead readers?” She used that question as the lead-in to a column, briefly said, “Never,” added that “a desk’s ability to protect children may be minor stuff” (compared, apparently, with misstating the location of an obscure statue), then went on to explore an unrelated issue. (http://www.nytimes.com/2015/10/04/public-editor/wanted-a-tougher-approach-to-truth.html)

    Presumably some Times readers realized that the earthquake guy was spouting nonsense, just to get coverage for himself and to build credibility for his upcoming trip to Bhutan. Others were likely misled, trusting that the Times wouldn’t print it unless it was worth printing.

    Did the Times make a mistake? Should the editors admit it? And what can of worms would it open up, if newspapers did acknowledge mistakes of this sort, rather than limit their corrections column to things like misspellings?

    Clearly they’d rather not address these questions, which have no easy answer, but I thank Andrew and others who are pushing them.
