Retracted articles and unethical behavior in economics journals?

Stan Liebowitz writes:

Have you ever heard of an article being retracted in economics? I know you have only been doing this for a few years, but I suspect that the answer is that none or very few are retracted. No economist would ever deceive another. There is virtually no interest in detecting cheating. And what good would that do if there is no form of punishment? I say this because I think I have found a case in one of our top journals, but the editor allowed the authors of the original article to write an anonymous referee report defending themselves and used this report to reject my comment, even though an independent referee recommended publication.

My reply: I wonder how this sort of thing will change in the future as journals become less important. My impression is that, on one side, researchers are increasingly citing NBER reports, arXiv preprints, and the like; while, from the other direction, journals such as Science and Nature are developing reputations as “tabloids,” publishing articles that are newsworthy but often empty of content.

Based on your side of the story (which is all I have to go on), I find the behavior of those two economists to be unsettling. The very fact that they wanted to suppress a criticism—that itself makes me suspicious. It would be much more appropriate for them to request space for a rejoinder in the journal.

Some attitudes surprise me. For example, on his blog, journal editor Steven Levitt wrote, “Is it surprising that scientists would try to keep work that disagrees with their findings out of journals? . . . Within the field of economics, academics work behind the scenes constantly trying to undermine each other.” See my discussion here.

To get to your question: I am not an expert on the econ literature and am aware of only two retractions. The first is the paper by Emily Oster, a young economist who engaged in a several-years-long battle with public-health researchers regarding a theory of sex-ratio changes in developing countries. She eventually gave up and admitted that the public-health researchers were right and she was wrong. (But she did not, as far as I know, retract her claims that economists benefit from having a special kind of reasoning.)

The second retraction I know of in econ is Bruno Frey’s admission that he had self-plagiarized (fulfilling “Arrow’s theorem,” which states that any unique result can be published up to five different times). This was a bit of a scandal; he was reprimanded by the editor of a journal that had published one of the redundant papers.

In any case, I think we can agree that the number of retractions is all too low. Here’s another story. Many years ago I had a colleague who showed me a manuscript of a paper he had submitted to a top journal in his applied field. I looked over the paper and realized it had a serious flaw. I won’t go into details, but the short version is that he had fit a probability model inappropriately. Nothing wrong with the computation, but the model didn’t fit the data. The parameter estimates didn’t mean what he thought they did. My colleague agreed with me (or, at least, he assented when I presented my criticisms) and he went on to fit a better model. But in the meantime, the original paper was accepted by the journal! I naively thought he’d withdraw the article but, no, he published it essentially as is. I asked him how he could in good conscience do this, and he said that he thought the paper still had good stuff in it and that it was an advance in that field. And, the funny thing is, maybe he was right! Who am I to say?

I was also more peripherally involved in a project with experimental data where the researchers had a before-after comparison with the treatment group and a before-after comparison with the control group. There was a large and statistically-significant improvement in the treatment group—and also a large and stat sig improvement for the controls. The difference in differences was not statistically significant. So what did the researchers do? You guessed it: they just reported the difference for the treatment group. This one bothered me a lot, but I didn’t follow up on it, and I’m not sure if maybe they cleaned it up before publication (I only saw an early version of the report).
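
The arithmetic behind that complaint is worth spelling out. Here is a minimal sketch with invented numbers (nothing from the actual study) showing how a flashy within-group improvement can coexist with a negligible treatment effect:

```python
# Invented numbers: a before-after gain in the treatment group can look
# impressive even when the control group improved almost as much.
pre_treat, post_treat = 50.0, 60.0   # hypothetical treatment-group means
pre_ctrl, post_ctrl = 50.0, 58.0     # hypothetical control-group means

change_treat = post_treat - pre_treat      # 10.0: the headline "improvement"
change_ctrl = post_ctrl - pre_ctrl         # 8.0: but the controls improved too
diff_in_diff = change_treat - change_ctrl  # 2.0: the honest treatment effect

print(change_treat, change_ctrl, diff_in_diff)  # 10.0 8.0 2.0
```

Reporting only `change_treat` conflates the treatment effect with whatever caused both groups to improve (time trends, regression to the mean, and so on); the difference in differences is the relevant comparison, and it can easily fail to reach statistical significance even when each within-group change is large.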

Back to the main thread. My impression is that the biggest concerns regarding retractions right now involve three issues:

1. Biased research in medicine, including: (a) drug companies wanting success and manipulating data and analysis to get there; (b) manipulation through selection (commissioning many studies and reporting only the ones you want); and (c) labs where the researchers are under intense pressure to get results, so they manufacture them. An extreme case of (c) is Robert Gallo stealing HIV and pretty much getting away with it.

2. Pure noise corrupting the signal. Here I’m thinking about professional researchers who don’t really do research, but they manage to publish their papers somewhere or other, and those journals and conferences which have nothing to do with science but exist only to make money off of publication fees etc. You might feel you could ignore these because they’re not prestigious journals, but can an outsider really distinguish between the (legit) Journal of Money, Credit and Banking and some fake journal out there? Also we hear about researchers in other countries who spend their time swamping legitimate journals with papers that are made up, plagiarized, etc.

3. Errors arising in serious research due to selection bias, confirmation bias, etc. Psychology researchers are particularly worried about this one: the idea is that if you look hard enough you can find confirmation of just about anything. The most famous recent case is psychologist Daryl Bem, who published a paper on ESP which was supported by 9 different experiments! That sounds pretty impressive (and it impressed the journal editors enough that they published it), but it turns out that, yes, if you’re looking for an effect, you can indeed find it, wrongly, in 9 different ways. There’s a growing concern in psychology research that this sort of thing is happening all the time–not just on joke topics such as ESP but in more respected areas of research such as motivation and social psychology.
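
A back-of-the-envelope calculation (not a claim about Bem’s actual analyses, which involved many researcher degrees of freedom within each experiment) shows how quickly false positives accumulate when you have many chances to find an effect. Assuming independent tests, each at the nominal 5% level, with no true effect anywhere:

```python
# Chance that at least one of n independent null tests comes up
# "significant" at the 5% level: 1 - (1 - alpha)^n.
alpha = 0.05
for n_tests in (1, 5, 9, 20):
    p_any = 1 - (1 - alpha) ** n_tests
    print(n_tests, round(p_any, 2))
# With 9 looks, a nonexistent effect is "found" about 37% of the time.
```

And nine independent tests is a conservative count: once you allow for flexibility in outcome measures, subgroups, and stopping rules, the effective number of looks inside a single experiment can be far higher.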

Liebowitz wrote back:

I was already familiar with Frey’s problem (I was recently told that his contract at Zurich was not renewed). I don’t consider self-plagiarism, or even pure plagiarism, as serious a “crime” as making stuff up. Even with his brazen behavior, I doubt that the same fate would have befallen him if he were in the prime of his career. The cost of letting him go now is much lower, although he was mentioned as a possible European Nobel prize candidate, which I suspect is no longer the case. Interestingly, he is supposed to attend a conference that I am attending, on scholarly publishing of all things, in about a month. I am looking forward to hearing what he has to say. I have trouble believing he will show up, although I am always surprised at the chutzpah that some people have to be able to ignore problems that would make me ashamed to show my face. One of the two authors whom I criticize for impropriety in piracy research was Frey’s student, and one of the self-plagiarized papers was coauthored by the two of them, although the fellow I believe to have made stuff up was too junior to be given credit for the deceit with Frey. Small world.

To which I replied that I agree that self-plagiarism is not as bad as plagiarism, which in turn is not as bad as outright fraud. (In many cases, though, plagiarism does involve fraud, for example a secondhand story presented as factual, or a plagiarized passage that is so badly garbled that it loses its original meaning.)

Also, I agree on the chutzpah thing. I’m stunned that people don’t just admit these offenses when they’re caught.

16 thoughts on “Retracted articles and unethical behavior in economics journals?”

  1. There are numerous cases of unreproducible research in economics that I and my co-authors have documented and published, but none has led to a retraction. [B. D. McCullough and H. D. Vinod, American Economic Review 93(3), 873-892, 2003; B. D. McCullough, Kerry Anne McGeary and Teresa Harrison, Journal of Money, Credit and Banking 38(4), 1093-1107, 2006; B. D. McCullough, Kerry Anne McGeary and Teresa D. Harrison, Canadian Journal of Economics 41(4), 1406-1420, 2008]

    Levitt himself has published non-reproducible research that he did not retract (Levitt, American Economic Review, 1997, 87(3), pp. 270-90). McCrary found a coding error in Levitt’s work, fixed it, and the new result contradicted Levitt’s conclusion (McCrary, AER, 2002, 92(4), pp. 1236-43). Levitt’s response (Levitt, AER, 2002, 92(4), pp. 1244-50) was not to retract but, essentially, to say, “OK, McCrary has given you one reason that my original conclusion was wrong, let me now give you ten other reasons that my original conclusion was right.” He wasn’t going to let the data (and his own code) stand in the way of his priors!

    Anyhow, I have done much replication work in economics. When the Liebowitz/Oberholzer dispute arose, I did what I always do in those situations: I obtained the original articles and all relevant writing by the parties, and read them carefully, examining code/data if available. I concluded that Liebowitz was right and Oberholzer was wrong. Levitt’s handling of Liebowitz’s comment, as described in one of Liebowitz’s SSRN papers, is appalling. Interested persons can read the relevant Liebowitz SSRN paper “Sequel to Liebowitz’s Comment on the Oberholzer-Gee and Strumpf Paper on Filesharing”.

  2. You might also be interested in this analysis, from some time ago:
    “Replication in Empirical Economics: The Journal of Money, Credit and Banking Project”
    William G. Dewald, Jerry G. Thursby and Richard G. Anderson
    The American Economic Review, Vol. 76, No. 4, Sep. 1986
    http://www.jstor.org/stable/10.2307/1806061

    Finding errors in other disciplines might have led to retractions, but it seemed not to, even when errors were clear.

  3. There was Jacob Viner’s famous cost curve mistake, which was caught by the draftsman who drew the curve for him but not by Viner or his editors.

  5. I think progress is slow. In 2008 we released a large (75 GB) database of interest to marketing and economics. [http://www.symphonyiri.com/Academics/tabid/172/Default.aspx] At last count, there were about 70 papers (many in working-paper stage) using this data set in some way. Last week, the FIRST issue concerning a researcher depositing data came up with an accepted paper:

    American Journal of Agricultural Economics: “authors are expected to submit their datasets and associated documentation (perhaps in a readme file), for readers to download from the AJAE website. Authors are also expected to provide whatever other material is needed to ensure that their results can be replicated (this might include code or pseudo-code used in estimation). Authors are encouraged to comply with all of this policy, but the editors would prefer partial compliance over non-compliance. The submitted files will be accessible as clickable links to the online version of the published paper. It is understood that in cases where the data are proprietary or confidential authors cannot fully satisfy this policy. Such cases should be brought to the attention of the editor at the time of initial submission.”

  6. “I am not an expert on the econ literature and am aware of only two retractions. The first is the paper by Emily Oster, a young economist who engaged in a several-years-long battle with public-health researchers regarding a theory of sex-ratio changes in developing countries. She eventually gave up and admitted that the public-health researchers were right and she was wrong.”

    I’m not sure Oster retracted her research. You can publish research which is correctly performed and reported, but supports a theory that just turns out to be incorrect. That’s tough luck but not culpable. She eventually admitted the public health researchers were right, but I’m not sure there was a retraction – which would imply gross errors in data, or calculations, or methodological flaws. It was just competent research which didn’t work out.

  7. Andrew, check this out:

    http://www.econjobrumors.com/topic/unreplicable-jfe-paper

    Someone found a serious problem with a highly cited empirical paper published in one of the three top journals in finance. It seems like everybody agrees that the journal would not publish a short note outing the error.

    Rumor has it that a large fraction of finance papers have serious issues. Not to mention the large fraction of papers using “proprietary data” (allegedly obtained from financial institutions under non-disclosure agreements) that are by definition unreplicable.

    How screwed up is that?
