You don’t want a criminal journal… you want a criminal journal

“You don’t want a criminal lawyer… you want a criminal lawyer.” — Jesse Pinkman.

In what sense is it a “blood sport” to ask someone for their data?

That’s our question for the day. But it’ll take us a few paragraphs to get there.

1. A case of missing data

Jordan Anaya points us to this news article by Tom Bartlett, “The Criminologist Accused of Cooking the Books,” which wrote of “statistical irregularities in five published papers”:

These were more than mere rounding errors or minor oversights. These were giant red flags: percentages that couldn’t be true, standard deviations that didn’t add up, and a host of other issues that defy easy explanation. The numbers didn’t make sense.

The obvious next step in determining what went awry would be to reanalyze the data. That data is in the possession of Eric Stewart, a professor of criminology at Florida State University and the only co-author who appears on all five papers. It’s Stewart who could, presumably, clear up what happened. But so far he’s been less than forthcoming, refusing to share key data even with his co-authors. Meanwhile the information he has offered has only deepened the confusion. For instance, a recently published correction for one of the five papers is itself shot through with numerical oddities, leading to suspicions that the numbers don’t make sense because the numbers might be, at least in part, made-up. . . .

In emails and text messages sent to colleagues, Stewart has portrayed himself as the target of “data thugs” who are attempting to ruin his career. . . .

“Data thugs,” huh? I guess that’s like “methodological terrorists,” i.e., people who tell uncomfortable truths.

Bartlett continues with more details on disappearing data and researchers who don’t want to talk about it.

So far, so bad, it’s a Wansink-like story about some people getting caught with their data down. Maybe somewhere there were some data that told some story, but some messy things happened between data collection, analysis, and publication. And, now that the publications are out there, the authors don’t want to give anything up. No big deal, just another day in the world of Retraction Watch.

The story then continued in the way that one might imagine. Here’s Bartlett again:

[Justin] Pickett [a coauthor of one of the papers, who was concerned about the data problems and wanted to get to the bottom of things] had lost patience with Stewart. In June, he sent a letter to the editors of Criminology, the journal that had published their paper . . .

In short, the findings in the paper — the one that Stewart had so generously invited Pickett to be a co-author on at a crucial moment in his career — were not only meaningless but might well be fabricated.

The lead author of that paper is Brian Johnson, a professor of criminology and criminal justice at the University of Maryland and also a co-editor of Criminology. When Pickett first emailed to tell him that Stewart had seemingly included hundreds of duplicates in the data, Johnson realized this was bad news. “That’s a huge fucking mistake. Man, what a shitshow,” Johnson wrote in an email to Pickett. “Obviously we are going to have to retract the article or at a minimum update it with new data and findings.”

Cursing aside, it sounded like the journal Criminology was gonna do the right thing.

2. Things go sour

But then came the backpedaling:

[Johnson] pushed back at suggestions that the paper might be partially based on fictional results: “I have absolutely no reason to doubt that the surveys are real and I do not think other people making accusations have any evidence of that either.”

Au contraire. As Bartlett discusses in his article, there was no evidence that one of these surveys was ever conducted, and the data from another one seemed to be in Wansink-level disarray.

And then:

[T]he editor in chief of Criminology, David McDowall, seemed less than eager about getting to the bottom of what might be wrong with this particular paper. He confirmed that he had seen the letter Pickett sent but says that he “didn’t read it in great depth.” As for the possibility of retraction, which Pickett has requested, McDowall was dubious about the concept. “I don’t even quite know what retraction is,” McDowall says. “I imagine that it could occur. I would think there would be legalistic implications.”

Ahhhh, now we’re getting somewhere . . . the kind of people who would make up data, they’re also the kind of people who would attack via frivolous lawsuits. This is the kind of selection bias whereby honest open people get pushed aside, while scammers get every benefit of the doubt. Everyone’s afraid the scammers will sue.

But it gets worse:

What seemed to disturb McDowall more than possible errors or fabrication was Pickett’s motivation in questioning the data in the first place. . . .

“It seems to me that it’s pretty hostile for Justin to start making these claims,” McDowall says. Pickett says “negative things” about Stewart, according to McDowall, although he couldn’t recall specific statements. . . .

The Javert paradox rears its ugly head! Call out misconduct and you get slammed for being a malcontent.

3. And now to our question

Bartlett continues:

“I think criminology is maybe a little behind other disciplines because we haven’t adopted the blood sport of ruining other people’s careers,” [McDowall] says.

Jeez, who are those picky people who want to ruin someone’s career, just cos they make up data. What next, are we going to retroactively fire A. J. Liebling from the New Yorker? Mark Twain made stuff up all the time, and nobody criticized him! J. K. Rowling too. Not to mention J. R. R. Tolkien. Did you think trolls and orcs were real??? So, sure, it’s ok to make up hobbits, quidditch, jumping frogs, all sorts of things . . . but you fabricate one little dataset, and all of a sudden people are out to get you! Jeez.

In all seriousness, all these people ever asked were to see the damn data. In what sense is it a “blood sport” to ask someone for their data?

I am so so sick of this hyperbole. Terrorism, Stasi, bullying, blood sport, blah blah blah. For chrissake. They’re just asking this guy for his data.

4. And then it gets worse

Bartlett continues:

When the journal has run into similar issues, the editors have dealt with it in what McDowall believes is a more humane manner. “This is not the first time that papers were published in the journal that were complete gibberish,” he says.

Wha????

Did I hear that right???

I mean, sure, “This is not the first time that papers were published in the journal that were complete gibberish.” That’s true of lots of statistics journals too, I’m sure. In some ways, I admire the journal editor for this admission. Still, it seems to miss the point.

To draw a New York analogy . . . suppose you’re eating dinner at an expensive restaurant and you see a big fat cockroach crawl across your plate. You call the manager and he tells you to shut up: “Yeah, our kitchen is infested with roaches. So what? What’s your problem???”

I don’t think I’d go back to that restaurant again.

Sure, all restaurants have roaches. I accept that. But the manager’s supposed to feel a little bad about it, right?

After Bartlett’s story came out, the editors of Criminology issued a statement, which begins:

Recent events have brought our private deliberations as the Co-Editors of Criminology, the flagship journal of the American Society of Criminology, under public scrutiny.

“Private deliberations,” huh? You’re the ones who talked about “blood sports” and how you publish “gibberish.” If you wanted your deliberations to stay private, you didn’t have to go publicly attack your critics, dude.

They continue:

We believe the preferred approach is to employ the classical comment-and-reply model. Critics submit a comment to the journal, presenting their objections. The journal editors review the objections for importance and relevance, focusing on the potential contribution to scholarship. If the editors have doubts about the contents of the comment, they seek the advice of anonymous reviewers. If the editors accept the comment, they offer the original authors an opportunity to provide an equally detailed reply. Depending on the circumstances of the case and the nature of the reply, the editors may offer the author of the comment a final opportunity to respond.

This sounds like B.S. to me. “The classical comment-and-reply model,” indeed. That’s the “classical” model where the editors keep the gatekeeper role, which is what created so many of these problems in the first place.

For some journals, maybe the comment-and-reply model works, but social science journals rarely run comments, corrections, or letters to the editor.

See for example this story. In this case I noticed a major problem with a published article in the American Sociological Review. I sent a letter to the editor, who sent it to three reviewers, none of whom disagreed with the merits of my criticisms, but the editors declined to publish the letter because they judged it not to be in the top 10% of submissions to the journal. In this case there was no suggestion of research misconduct. The authors just made a mistake. So much for the classical comment-and-reply model.

Amusingly, the above-linked statement cites the American Sociological Review as an exemplar of the comment-and-reply model, even though in my own experience they couldn’t handle a comment—and it was a simple one, a pure statistical issue with no suggestion whatsoever of impropriety.

Here’s more on this story from James Heathers.

5. Criminal research

I’ve had some weird experiences in this field, associated with an article that Torbjørn Skardhamar, Mikko Aaltonen, and I wrote for the Journal of Quantitative Criminology which included the following passage:

We wanted to reanalyze the dataset of Nelson et al. However, when we asked them for the data, they said they would only share the data if we were willing to include them as coauthors. We did not want to do so, and so we found the papers ourselves.

A blog commenter asked what happened, and Skardhamar replied:

We did not include such a comment at first, but then again: not using the original data needed an explanation, and that was how it was. (We could have asked yet again, of course, but I got the feeling it would not have changed the outcome).

So I guess I knew that there could be data issues. But I had no idea things could get as bad as described in Bartlett’s article.

6. The big picture

What really gets me about these editors is that they don’t act as if they care about the actual scientific claims being made. After all, if you really thought these claims were potentially important, you’d want their flaws revealed right away. Or if you thought the criticisms were themselves flawed, you’d like those flawed criticisms exposed. You’d be in a hurry to get to the bottom of the story. Wanting to bury the criticisms or never publish them at all: that’s not the action of someone who gives a damn. Which is kinda sad for the editor of Criminology, given that criminology is actually an important applied subject. It’s not like he’s editing a journal in pure math, or art history, or something else of primarily intellectual interest.

It reminds me of something that someone, maybe Jordan Anaya, said about Brian Wansink, along the lines of: it’s sad, the critics know more about Wansink’s studies than Wansink himself does.

Kinda like the way that various big-shot social psychologists defend bad work: if they really cared about these claims, they’d be concerned about the quality of the theory and evidence. But if all they care about are (a) their friends getting good press and living stress-free lives, (b) a general feeling that academic psychology is a good thing, and (c) a belief that evolutionary psychology is true, then they’re much more motivated to shoot the messenger.

P.S. Just to be clear: I don’t have any direct information on whether anyone involved in the above story committed any crimes. What I think is that criminology is important, and it’s a scandal that the editors of the purportedly leading journal in the field don’t seem to care if they publish fake data, gibberish, etc. It was embarrassing for the field of statistics that one of our leading journals published that Bible Code paper, but that was back in 1994! And we’ve tried to clean up our act since then. The journal Criminology should clean up its act too, or it’s gonna get a reputation like Lancet or Psychological Science or PNAS, of being willing to publish just about any crap as long as it tells a good story.

33 Comments

  1. Psyoskeptic says:

    If you claim that a request for your data is an attempt to ruin your career then you’ve already told everyone the main thing they need to know about your data.

  2. Matt Skaggs says:

    “Why would I give you my data when all you want to do is find problems with it?”

  3. jim says:

    “In this case I noticed a major problem with a published article…The authors just made a mistake. “

    If the authors made a mistake, why would this even go to “comment and reply”? Comment and reply is for open disputes and questions of interpretation.

    Errors are a different issue. If there’s a math or other indisputable error in a paper, it should *always* be published by the journal that published the paper, ASAP, even if its impact on the results is modest or disputed. An error is an error. The journal should provide its readers with notification. If the authors wish to contribute a comment to explain the error’s impact on the results, great. But that’s a different question.

    • Anoneuoid says:

      Maybe the editor isn’t able to verify for themselves that there was an error? I’ve come across people (not a journal editor, just random people) who didn’t trust me that 1,500 “x” out of 10,000 “y” meant that 15% of y were x.

      No authoritative source said specifically 15%, only the 1,500 and 10,000 numbers. They did not trust themselves to even perform that step. It wouldn’t surprise me if there was some lack of confidence in verifying this type of thing:

      percentages that couldn’t be true, standard deviations that didn’t add up, and a host of other issues that defy easy explanation. The numbers didn’t make sense.

    • Andrew says:

      Jim:

      When I say there was an error, I don’t mean it was a numerical error in calculation. The error was in not considering an obvious threat to validity of the inference. The reviewers agreed with me, but their response was a mixture of (1) I’m correct but this sort of letter is not what the American Sociological Review publishes, (2) it doesn’t matter because the research in the published article is so important, and (3) I didn’t demonstrate that correction of the error would materially change the results of the article. I agree with point (3), but: (a) it was extremely difficult to get to the data, as this required navigating multiple IRBs, and I eventually gave up (one door opens and then the next door closes), and (b) had the error been caught in the review process, I’m almost sure that the relevant claims in the article wouldn’t have been published. The process had a sort of hysteresis, by which a certain argument was unpublishable (as it represents a strong claim not supported by the data), but if the error was not discovered before publication, the claim was considered untouchable.

      In this case, I think the appropriate response of the journal would have been to publish my letter and link it to the published paper. It would’ve been ok to also include a responding letter from the authors thanking me for pointing out the error in their paper, but it wouldn’t really have been necessary. It was my impression that the #1 concern of the journal editor was that I not inappropriately get credit for an ASR article just based on that letter of mine. Perhaps I could’ve resolved the problem by suggesting that the editorial board publish the correction themselves without presenting it as a letter by me; then they could’ve fixed the literature without giving me any of that precious ASR publication credit.

      • jim says:

        Andrew:

        It wasn’t clear to me what kind of error it was. I presumed that it was an “error” as opposed to a “difference in interpretation or opinion,” and your summary seems to support that, although surely the distinction isn’t always clear. But reviewers acknowledged that your concerns were valid and, rather than rejecting your point, they presented reasons not to publish it.

        To me it makes sense that if the reviewers all agreed it was an error or significant oversight, then it should be published. How that’s done is a different question that’s important but doesn’t seem critical to me.

        The whole concept of communicating errors and research issues is emerging as a significant challenge for the research community (in all sciences). I guess with the advent of rapid (web) communication and the dramatically increasing use of research to support policy, scrutiny has increased a lot and the light it shines isn’t always flattering. People seem caught off guard by the idea that they should be subject to scrutiny.

        • jim says:

          Jim said: “How that’s done is a different question that’s important but doesn’t seem critical to me.”

          Expanding on that: IMO it’s the editor’s responsibility to get the information out. They should just make a decision on what form publication will take. If the interested parties have complaints or concerns about the form, fine, they can take that question back to the eds and refine the process, but the first step is to publish the info.

        • Martha (Smith) says:

          Jim said,
          “The whole concept of communicating errors and research issues is emerging as a significant challenge for the research community (in all sciences). I guess with the advent of rapid (web) communication and the dramatically increasing use of research to support policy, scrutiny has increased a lot and the light it shines isn’t always flattering. People seem caught off guard by the idea that they should be subject to scrutiny.”

          I came to statistics from mathematics — and there, scrutiny of one’s reasoning is part of the culture, part of the subject matter. When teaching mathematics, I told students in the first-day handout that unless a problem on an exam or homework was marked “short answer,” they needed to show their reasoning; they couldn’t expect full credit (or maybe any credit) on a problem if their reasoning was absent, inadequate, or faulty. Whenever classes were small enough, I had students present their solutions in front of the class and asked the class to critique the presentation. When I taught statistics, I followed the same philosophy.

          A lot of students didn’t like the degree of accountability, but there were enough who came back later to tell me that although they didn’t like it at the time, they later realized that it was a good thing to expect of them, that it helped them learn, not only in my class but also in later classes.

          • jim says:

            Yes it’s interesting how different disciplines view that sort of thing.

            I was surprised to find out in grad school that so many people in science as a whole frown on being asked for their data, and they *especially* frown on giving it to non-academics. There seems to be a sense that within the academic club there are ties that prevent people from too much trash talking, but if it gets outside the club, the person who gets it might be insulated from retribution.

            “A lot of students didn’t like the degree of accountability, but there were enough who came back later to tell me that although they didn’t like it at the time, they later realized that it was a good thing to expect of them”

            This is so true, and it’s why I get kind of worked up about universities and faculty getting too “student centered” about teaching. Students don’t know what they need! They’re there to benefit from the experience of people older and wiser.

            • Martha (Smith) says:

              There are (at least) two ways of being “student-centered” about teaching: You seem to be thinking of it as something like “Asking the students what they want”. But the interpretation I would encourage is “Focusing on student learning, including developing students’ understanding and their thinking skills, rather than focusing on presenting course material all laid out neatly by the teacher or textbook.”

              • jim says:

                Martha (Smith) says: “Focusing on student learning, including developing students’ understanding and their thinking skills”

                I endorse this most vigorously! :)

                But “student centered learning” is “a thing”, which more or less equates with “asking students what they want”:

                “In a student-centered learning space, students choose what they will learn, how they will pace their learning, and how they will assess their own learning. This is in contrast to traditional education, also dubbed “teacher-centered learning”, which situates the teacher as the primarily “active” role while students take a more “passive”, receptive role. “

                One cannot be passive and learn very much, so the idea of an “active” teacher and “passive” student is a caricature or strawman depiction of traditional education. And I don’t know how students would know what they need to learn or how they need to learn it – as you pointed out, they often don’t like what turns out to be good for them.

                So I do get a bit fired up about that.

  4. jim says:

    If an error is discovered in a published paper, the journal that published the paper should *always* notify its readers immediately, regardless of whether or not the error seems to impact the results.

    The comment and reply section is for discussion of open scientific issues. It should be distinct from scientific errors. Errors should be published in a separate section where the error is clearly identified in neutral language. Whatever discussion there might be about the impact of the error can be held in the comment and reply section at the judgement of the editors, but the error itself should *always* be identified by the journal.

  5. Justin says:

    Great post, Andrew.

    I agree with Jim: Journals should notify readers immediately when a serious error is discovered.

  7. Garnett says:

    Is the financial well-being of the journal or publisher harmed by retraction or admission of error?

    I found an important typo in one of my papers, and notified the editor. In the erratum the journal wanted me to state explicitly that it was my mistake and that the journal was in no way responsible.

    • Dale Lehman says:

      Of course there may be financial/reputational consequences for a journal – and there should be. However, in the case being discussed here the journal cannot claim that the errors are totally the responsibility of the authors. Journals should be held accountable for their peer review and editorial policies, and it seems like those are the issues highlighted in this particular case (at least as important as the authors’ responses to any errors they might have made).

  8. Michael Nelson says:

    As members of an important, cross-disciplinary field, most methodologists and statisticians care about the perceived and actual quality of quantitative analyses in published work. But, to my knowledge, we have no collective organization that has as its chief priority promoting actual/perceived quality, or even just preserving an accurate record of criticisms for posterity. Professional societies generally have a conflict of interest in that they publish their own journals and/or have conflicting sponsorships. Retraction Watch is great, really amazing, but it speaks volumes that it is a non-profit funded by charitable foundations and public donations. Maybe that’s the only way to prevent conflicts of interest, but they have no institutional authority, and there is no formal structure to preserve their role or their past work.

    It’s a conundrum.

  9. Klaas van Dijk says:

    Several readers of this blog are aware that I am working with others to get a fraudulent study on the breeding biology of the Basra Reed-warbler retracted. There has been no progress for a very prolonged period of time, because (1) all relevant parties refuse to accept that the raw research data do not exist, (2) all relevant parties refuse to start a serious debate / dialogue about the whereabouts of this non-existent set of raw research data, and (3) queries to get access to this set of raw research data invariably meet with no response and/or legal threats.

    I have in the meanwhile prepared a manuscript which is mainly based on findings in two reports which can be downloaded from https://osf.io/5pnk7/ The manuscript contains additional details and it has already been submitted to several peer-reviewed journals.

    Some responses from these journals are remarkable.

    Alan Lee, EiC of Ostrich https://www.tandfonline.com/loi/tost20 , wrote that he had no opinion about this case because (a) he had never visited Iraq, (b) he had never visited Iran either, and (c) he had, to the best of his knowledge, never observed a Basra Reed-warbler.

    Dominic J. McCafferty, EiC of IBIS https://www.bou.org.uk/ibis/ and https://onlinelibrary.wiley.com/journal/1474919x wrote: “Secondly, by focusing on a specific case here, then I would be concerned Ibis could become embroiled in a legal argument with another publisher.”

    Pippa Smart, EiC of Learned Publishing https://onlinelibrary.wiley.com/journal/17414857 wrote: “The reviews and reason for rejection make no mention of the existence, or not, of the Al-Sheikhly research data and the reason for rejection of your article was unrelated to their existence or not.” Reviewer Z wrote, among other things: “Perhaps it is a shortcoming and/or failing of academia and the research publishing industry that no formal conclusion has been reached on this situation after six years.”

    This response from Pippa Smart, EiC of Learned Publishing, was received after I had filed an appeal against a rejection which was in my opinion unfair and biased.

    I had listed 1000 points in my appeal. This response from Pippa Smart refers to point 2 of my appeal (‘Pippa Smart, EiC of LP, is refusing to accept that the raw research data of Al-Sheikhly et al. (2013) do not exist.’). Points 7 – 1000 of my appeal were similar (‘a refusal to accept that the raw research data of Al-Sheikhly et al. (2013) do not exist’).

    Pippa Smart responded to points 7 – 1000 with: “See comment on point 2”.

    See https://osf.io/6m3cq/ for the first version of a Letter to the Editor. It took one and a half years to get this published as https://www.tandfonline.com/doi/full/10.1080/09397140.2015.1023424 (and with a so-called rejoinder at https://www.tandfonline.com/doi/full/10.1080/09397140.2015.1023426 ).

  10. Magikarp says:

    Cough CRU cough

    [https://www.lavoisier.com.au/articles/greenhouse-science/climate-change/climategate-emails.pdf]

  11. Jordan Anaya says:

    Pickett has a new article out that includes personal communication among the various authors: https://econjwatch.org/File+download/1139/PickettMar2020.pdf

    There’s a lot there, you might particularly like the bit about data thugs.

    • Andrew says:

      Jordan:

      Yes, I saw this and wrote a post about it, should appear in Oct. Pickett’s story is Kafkaesque and not surprising at all. I imagine that if someone like Robert Sternberg saw it, he’d instinctively side with the people who did the coverup, and he’d wonder why Pickett didn’t just play ball. Why would he want to remove a line on his own CV?
