Hey—Don’t trust anything coming from the Tri-Valley Center for Human Potential!

Shravan sends along this article by Douglas Peters and Stephen Ceci, who report:

We selected 12 already published research articles by investigators from prestigious and highly productive American psychology departments, one article from each of 12 highly regarded and widely read American psychology journals with high rejection rates (80%) and nonblind refereeing practices.

With fictitious names and institutions substituted for the original ones (e.g., Tri-Valley Center for Human Potential), the altered manuscripts were formally resubmitted to the journals that had originally refereed and published them 18 to 32 months earlier. Of the sample of 38 editors and reviewers, only three (8%) detected the resubmissions. This result allowed nine of the 12 articles to continue through the review process to receive an actual evaluation: eight of the nine were rejected. Sixteen of the 18 referees (89%) recommended against publication and the editors concurred. The grounds for rejection were in many cases described as “serious methodological flaws.”

Amusing. On the plus side, it could reflect a positive trend: crappy papers that were getting accepted two years ago would get rejected now.

20 thoughts on “Hey—Don’t trust anything coming from the Tri-Valley Center for Human Potential!”

  1. I assume “nonblind refereeing practices” means that the authors’ names and affiliations are provided to the referees. That is a huge influence on the referee. Junior referees do not feel comfortable recommending the rejection of a paper by an established researcher. The referees have professional relationships with the author and know the author’s publication history. Take the same paper and change the author to Mr. Nobody from the never-heard-of Tri-Valley Center for Human Potential, and the same referees read the paper with a more discriminating eye.

    Of course, blind refereeing is not a panacea. In many cases a referee can perform a frequency analysis of the bibliographic citations. The author whose work is cited most often is probably a co-author of the paper!
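
    A toy sketch of that frequency analysis, assuming a simplified “Surname, F.” reference format (the regex and the made-up references here are purely illustrative, not how any real journal system parses citations):

        from collections import Counter
        import re

        def guess_author(bibliography):
            """Tally how often each author name appears in the reference list;
            the most-cited name is a plausible (co-)author of the paper."""
            counts = Counter()
            for ref in bibliography:
                for surname, initial in re.findall(r"([A-Z][a-z]+), ([A-Z])\.", ref):
                    counts[f"{surname}, {initial}."] += 1
            return counts.most_common(3)  # top candidates, most self-cited first

        refs = [
            "Smith, J., & Jones, K. (2010). ...",
            "Smith, J. (2012). ...",
            "Smith, J., & Lee, P. (2014). ...",
            "Brown, A. (2009). ...",
        ]
        print(guess_author(refs))  # [('Smith, J.', 3), ('Jones, K.', 1), ('Lee, P.', 1)]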

    • I think that most relatively respectable journals these days use automated plagiarism-detection software to screen new submissions. This should mean that any copying and pasting of large sections of text would be picked up by the editorial office.

      • I think the thing to do would be to take recently accepted/newly forthcoming papers and then submit them to other journals. Thus, the submitted material would 1) not show up in plagiarism detection databases and 2) be very unlikely to be recognizable to referees as a resubmission of published work (even those familiar with the paper would be unlikely to know that it had been accepted).

        Of course, it’s kinda crappy to waste reviewers’ and editors’ time to prove the point that peer review is capricious. Anyone who has been in academia for more than 5 minutes already knows that this is true.

      • Recent experience leads me to believe that these plagiarism detectors are either not as good as they claim to be, or not as regularly employed as the journals claim. Or maybe they just run the software only after the reviewers have already approved it (which obviously seems like a big waste of reviewer time).

        • My experience of using one of the main options for plagiarism detection software is that it is pretty effective at matching up any copied sections of text (even very short sections, and even with some paraphrasing). However, as you suggest, a journal needs to be well managed to ensure that such tools are used appropriately.
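
          For intuition, the core matching idea in these tools is roughly: chop both documents into overlapping word n-grams (“shingles”) and score the overlap. A minimal sketch, assuming an arbitrary 5-word shingle and an arbitrary 0.15 flag threshold (commercial products do fuzzier matching than this; none of these choices come from any actual vendor):

              def shingles(text, n=5):
                  """Overlapping n-word shingles of a text (n=5 is an arbitrary choice)."""
                  words = text.lower().split()
                  return {" ".join(words[i:i + n]) for i in range(max(len(words) - n + 1, 0))}

              def jaccard(a, b):
                  """Jaccard similarity of two shingle sets."""
                  return len(a & b) / len(a | b) if (a or b) else 0.0

              def looks_copied(submission, published, threshold=0.15):
                  """Flag high shingle overlap between a submission and a published text."""
                  return jaccard(shingles(submission), shingles(published)) >= threshold

          Exact shingle matching like this is brittle against heavy paraphrasing, which may be part of why lightly rewritten resubmissions can slip through when a tool isn’t used carefully.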

  2. Yes, this has been replicated as recently as NIPS 2014. See John Langford’s blog post, The NIPS experiment. Neil Lawrence and Corinna Cortes co-chaired NIPS in 2014 and ran a controlled experiment. The takeaway message (without uncertainties):

    Let’s give P(reject in 2nd review | accept 1st review) a name: arbitrariness. For NIPS 2014, arbitrariness was ~60%.

    • This isn’t quite the same. From my quick glance, it appears that the NIPS experiment isn’t changing the affiliation of the submitted papers, but rather seeing whether the same paper, previously accepted, is accepted again when resubmitted. They found P(reject in 2nd review | accept 1st review) = 0.6, as you note (see the back-of-the-envelope sketch below).

      In the Peters and Ceci study, according to the abstract, the stated affiliation of the authors was changed for the second submission, to something less prestigious than [fancy university]. This drops P to about 0.1 (only one of the nine resubmissions that received a full evaluation was accepted). I haven’t read the actual paper, so I don’t know whether there was also a control with unchanged affiliations.

      Perhaps we should tell these people to change their name: http://harris.uchicago.edu/centers/chppp
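
      For reference, here’s the back-of-the-envelope arithmetic behind the ~60% figure, using the approximate numbers from the public NIPS 2014 write-ups (roughly 166 dual-reviewed papers, ~22.5% acceptance, ~25.9% disagreement); treat them as rough, not exact:

          # Approximate figures from the public NIPS 2014 experiment write-ups.
          n_dual = 166        # papers reviewed independently by both committees
          p_accept = 0.225    # each committee's overall acceptance rate
          p_disagree = 0.259  # fraction of dual-reviewed papers with split decisions

          # With two symmetric committees, the two disagreement types
          # (accept/reject and reject/accept) should be about equally common.
          accept_then_reject = p_disagree * n_dual / 2   # ~21.5 papers
          accepted_by_first = p_accept * n_dual          # ~37.4 papers

          arbitrariness = accept_then_reject / accepted_by_first
          print(f"P(reject 2nd | accept 1st) = {arbitrariness:.0%}")  # ~58%, i.e. roughly 60%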

  3. A little off-topic but worth reading: Ron Driggers, “Think Twice Before Declining A Manuscript Review Request” –
    https://www.osapublishing.org/ao/fulltext.cfm?uri=ao-54-16-ED6&id=319473

    All: What fraction of review requests have you declined? I get probably two requests per year, and that’s been the case for about 15 years. I’ve declined maybe 3 or 4 times – all cases where I lacked the expertise needed to provide a rigorous review. So I figure my acceptance rate is in the vicinity of 90%. I’m almost always a week or two late submitting my reviews, but I get them in. It seems like professional courtesy to do it, presuming you’re not getting an unreasonable number of requests – and a couple of requests a year seems reasonable to me.

    • I get 1-2 requests to review per month and cap myself at reviewing one paper per month. My refusals are usually because I’ve hit that cap, or because I felt I lacked expertise in the area. My reviews tend to be longer than average and take considerable time and effort. It is definitely an odd system, worthy of its own post some time…

    • Chris, Raghuveer:

      Yes, I’ve posted something on this . . . ummm, let’s do a search . . . nearly 10 years ago! I haven’t kept track of how many requests I get, maybe 3 a week? I review a lot of them—I just received a paper to review for Public Opinion Quarterly, which I think I’ll say yes to, so that will actually be 2 referee reports this week.

      As noted in the linked post, I just about always do my review within 15 minutes. The review is short, and of course the editor is aware that my review is short, so he or she can use that information accordingly.

      I have no problem with other people spending more time on these reports; I just think that I will make a more useful contribution to science and society by doing 100 ten-minute reviews rather than a few reviews that take hours each.

      Of course, writing this comment took 10 minutes, but I think it’s a useful contribution too!

      • Three a week?! Wow. I didn’t have a specific number in mind when I wrote “an unreasonable number of requests” but three per week is definitely well above threshold. One per month feels like an upper bound on “reasonable” if the review gets into the weeds. (15 minutes per review seems consistent with doing a few per week.) For what it’s worth, I pretty much follow Cowen’s guidelines. If I were to write my own, his Point #2 would probably be Point #1. I probably spend about 4 hours per review.

        • One more thing: In a better world, every manuscript would receive a few detailed reviews and a handful of quick reviews. Toward that end, I like the idea of pre-publication review, i.e., posting your manuscript online and taking comments from interested parties.

      • 15 minutes? I almost spit out my coffee this morning reading this. At least it’s qualified in your original post with “On the other hand, it would probably be a disaster if all referees followed my approach.” I’m curious how much time academics in general spend per review — I’ve never seen data on this. (My reviews take a few hours.) I realize this is buried in a comment thread on a days-old post, but if anyone feels like commenting with a datapoint at the bottom of this blurb I just wrote, please go for it: https://eighteenthelephant.wordpress.com/2015/08/18/in-the-future-my-reviews-will-consist-solely-of-one-carefully-picked-emoji/
