I posted this as a comment on a sociology blog

I discussed two problems:

1. An artificial scarcity applied to journal publication, a scarcity that I believe is enforced on the monetary principle of not wanting to reduce the value of publication. The problem is that journals don't just spread information and improve communication; they also represent chits for hiring and promotion. I'd prefer to separate these two aspects of publication. To keep these functions tied together seems to me like a terrible mistake. It would be as if, instead of using dollar bills as currency, we were to just use paper, and the government kept paper artificially scarce to retain the value of money, so that we were reduced to scratching notes to each other on walls and tables.

2. The discontinuous way in which unpublished papers and submissions to journals are taken as highly suspect and requiring a strong justification of all methods and assumptions, but once a paper becomes published its conclusions are taken as true unless strongly demonstrated otherwise.

The thread was called "so, have things changed at the ASR [American Sociological Review]?" and my comment was:

I am irritated that ASR published a paper with serious statistical flaws but then did not publish my letter pointing out the flaws. I think this is a systematic problem with journals, that the informal rules for publication (that findings be substantively important and statistically significant) bias things toward the publication of exaggerated claims. It goes like this: paper A makes a dramatic claim and is published. It turns out that paper A has methodological problems. But pointing out such problems is less exciting than the original large claim, of course. To say it again: I understand and appreciate the rationale for wanting to publish major papers with major claims. But I do think this introduces what one might call a bias toward drama. Perhaps it’s just my statistics training that gives me a bias toward boringness. I think it’s worth going back and spending the time to criticize things that have already been published.

It may be that in this particular situation my letter did not warrant publication (I think it did, but that's my perspective), but in any case I think the reluctance to print criticisms is a major problem with lots of journals. Nothing especially bad about ASR here.

Related to this is the idea that journals don’t just spread information and improve communication, they also represent chits for hiring and promotion. From that perspective, I can see the attitude of, “We can’t just publish every critical letter, then people will do nothing but criticism as it’s so cheap compared to original research.” But that attitude irritates me because I wasn’t writing that letter to get a chit; I was writing the letter as a public service. I’d have no problems if critical letters were identified as such in the publication record so the whole chit issue wouldn’t have to come up.

Brayden King added:

My sense is that most journals rarely publish letters of response because they don’t like to use the print space. But this could be easily resolved in today’s age of online journals. Most readers of ASR don’t actually read the print version anyway. They read the articles online. It would be really easy to post letters, addendums, appendices, etc. directly underneath the original article in the online journal. There would be some cost, of course, including editorial services, but I think it would be well worth the investment.

I responded to Brayden that I completely agree that online publication should be no problem. But I think the issue is not just physical space in the journal (or even the time it takes for the journal staff to edit and proofread the articles). I think the “chit” issue also comes into play.

The story of my experience at ASR is in my article, "It's too hard to publish criticisms and obtain data for replication." As I wrote, I don't fault the author of the original article–we all make mistakes–and in some sense I don't fault the ASR either, as they're following their policy. The reason they gave for not publishing was that the reviewers and editor agreed that it was not important enough to warrant publication. They also had some specific criticisms of my letter but I think the non-importance was key. I demonstrated that the article had statistical flaws but I did not demonstrate that the flaws would have serious impact on the article's major conclusions. I think I could've done this but it would've required more work, and I was already having lots of problems getting the data (not the fault of the author of the article, it was a problem with the keepers of the dataset).

Here’s what I wrote in my article about the episode:

The asymmetry is as follows: Hamilton’s paper represents a major research effort, whereas my criticism took very little effort (given my existing understanding of selection bias and causal inference). The journal would have had no problem handling my criticisms, had they appeared in the pre-publication review process. Indeed, I am pretty sure the original paper would have needed serious revision and would have been required to fix the problem. But once the paper has been published, it is placed on a pedestal and criticism is held to a much higher standard.

Again, the point is that had my comments been in the form of a referee report, they would have to have been addressed, there’s no way the article could’ve been published as is. As a referee, I would not need to offer an independent data analysis and proof that the statistical error would have a major effect on the conclusions. It would’ve been enough just to point out the error. But once the article appears, the burden of proof is reversed. And I think that’s too bad. I think it would be appropriate to publish my letter (and I’d have no problem if, in the review process for my letter, I’d been told to add a paragraph emphasizing that I had not demonstrated that the statistical error had a major effect on the conclusions).

21 thoughts on "I posted this as a comment on a sociology blog"

  1. When a letter pointing out flaws in a published paper is submitted, it exposes faults in both the paper and the review process. This at least creates the impression that the journal is refusing to publish in order to save face. Sometimes the editor who handles the letter is the same one who wrote the final (almost form) acceptance letter saying how happy they were to publish the wonderful paper, and the letter basically says that editor was wrong. Do you think it would help to have a different editor just in charge of letters?

    • As I wrote that note, this arrived in my inbox from the journal that published Bem's ESP paper. It says the criteria include "importance of the finding being replicated" but does not discuss failures to replicate or critical letters.

      New Policy for the Journal of Personality and Social Psychology: Attitudes and Social Cognition

      The Journal of Personality and Social Psychology: Attitudes and Social Cognition is inviting replication studies submissions. Although not a central part of its mission, the Journal of Personality and Social Psychology: Attitudes and Social Cognition values replications and encourages submissions that attempt to replicate important findings previously published in social and personality psychology. Major criteria for publication of replication papers include the theoretical importance of the finding being replicated, the statistical power of the replication study or studies, the extent to which the methodology, procedure, and materials match those of the original study, and the number and power of previous replications of the same finding. Novelty of theoretical or empirical contribution is not a major criterion, although evidence of moderators of a finding would be a positive factor.

      Preference will be given to submissions by researchers other than the authors of the original finding, that present direct rather than conceptual replications, and that include attempts to replicate more than one study of a multi-study original publication. However, papers that do not meet these criteria will be considered as well.

      • It sounds like this journal has jumped on the "bandwagon" started by the journal Social Psychology, with its recent special issue (http://www.psycontent.com/content/l67413865317/?p=1dd5ebc505f9492c8fb93be3f7c713f3&pi=0) featuring "registered reports" (http://www.psycontent.com/content/311q281518161139/fulltext.html).
        Although replications are important, this movement as currently constituted seems to focus on "direct replications," interpreted as re-running the same methodologies as in the original paper on new data. This does not provide a mechanism for improving the methods in the original study; in other words, it pushes aside the kinds of criticisms that Andrew is discussing. I've looked at a couple of papers in the issue, and believe that the original paper had some questionable practices (specifically: a questionable choice of outcome variable, using ANOVA without any discussion of whether the data gathered could reasonably come from a distribution close enough to the requirements of ANOVA to rely on the results of that method, and not accounting for multiple testing). These were not addressed in the replications either, in accord with the policy of replicating the methods used in the original study.
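        The multiple-testing point above is easy to demonstrate with a toy simulation (a minimal sketch, not from the thread or from any of the papers discussed; the choice of 20 tests is purely illustrative): when many true-null hypotheses are each tested at alpha = 0.05 without correction, the chance of at least one spurious "significant" result grows rapidly.

```python
import random

random.seed(1)

def false_positive_rate(n_tests, alpha=0.05, n_sims=10_000):
    """Fraction of simulated studies reporting at least one
    'significant' result when every null hypothesis is true
    (under the null, each p-value is uniform on [0, 1])."""
    hits = 0
    for _ in range(n_sims):
        if any(random.random() < alpha for _ in range(n_tests)):
            hits += 1
    return hits / n_sims

# One test: the familywise error rate is just alpha, about 0.05.
# Twenty uncorrected tests: about 1 - 0.95**20, roughly 0.64.
```

        With one test the error rate stays near 0.05, but with twenty uncorrected tests roughly two-thirds of all-null studies will report something "significant," which is one reason an uncorrected multi-test analysis can look more dramatic than it should.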

        • Yes, indeed I think there is a value in publishing criticism, even if said criticism does not involve replication or demonstration that an improved analysis would result in substantively different claims. One of my struggles has been with the attitude that a paper in a major journal has to be a Big Deal and so people seem to feel that mere criticism isn’t enough to merit publication. This frustrates me because the result is that clearly wrong methods just stand there uncriticized, then new researchers can come along and consider these papers as templates for their work, etc.

        • Andrew:

          Agreed. What journals don’t seem to understand is that the really Big Deal is the perceived credibility crisis in empirical social science.

          I don’t trust what they publish anymore. And to top it all the whole publication process is an admixture of Chinese torture from the Song Dynasty (https://en.wikipedia.org/wiki/Slow_slicing) and medieval craft. Henry Ford would have a heart attack.

        • BTW, it’s not just social science that has these problems. I see a lot of it in biological science — but it’s not as bad as in the social sciences; there is more of an attitude of “criticism is OK” (e.g., in lab meetings or student seminars that discuss current or not-so-current papers). The “template” problem, as Andrew puts it, is still there, but more people are willing to challenge templates, especially if someone points out their flaws. Or, to put it more personally, if I point out a problematical use of statistics to a biologist, they are more likely to listen and take what I say seriously than a social scientist is; they are less likely to react by dismissiveness or as if I have shaken their belief in God. I guess there’s just more of a sense of humility in biology than in the social sciences.

        • Martha,

          I think that what you say may be true in a lot of the social sciences, but I don't think it's true in all of them. In particular, in my field of empirical economics, methodological questions and prodding are commonplace and people regularly re-analyze their data in response to particular points of methodological criticism. It is part of the informal peer review process that goes on before papers are submitted/published as well as during formal refereeing, and it is a (if not the) cornerstone of our training. My entire methods course was basically built around looking at recently published papers (say last 10 years or so) and critically evaluating them – none were thought to be right or perfect, just better or worse, and always with some room to improve.

          My experience with biologists is that they are also receptive to methodological criticism, but that getting good feedback is not super common for them. In my limited experience, it almost never comes up in seminars/presentations. I wonder if the “bad social science equilibrium” stems from people constantly getting mediocre statistical criticism from their peers, and so they are forced to develop a kind of reflexive defensiveness (to “stand by” their work and project confidence, or something). In my world, that will kill you in a seminar – you have to be able to respond clearly and thoughtfully, or admit you hadn’t thought about it, because most of the people in the room have deep statistical training – but that may not be true across the social sciences.

    • The review process is partly at fault because, in this connection, I think it is both misunderstood and misused. The review process is not there to make a submitted paper perfect, or ground-breaking, or immune to criticism; it's just to make sure the paper deals with its subject in recognition of the state of the field (well, that's how I review papers) and does not make mistakes or omissions with respect to the extant literature. If reviewers and editors had this view, then we'd get good debate, we'd flush out both truth and error, and people would learn more than they do with the current grandstanding approach that some journals seem to adopt.

  2. American Journal of Political Science (a major poli sci journal) used to have a section of comments on published papers back in the 1990s. That feature later disappeared. I am curious about how that decision was made. It seems very relevant to this discussion.

    • The AJPS has a positively anti-replication policy:

      The American Journal of Political Science does not review manuscripts that:

      – Are unsolicited rejoinders to recently published articles;

      Interestingly AJPS has done very well in enforcing the submission of replication files. But this just shows that a replication policy cannot be limited to posting replication files. It must also involve journals taking full responsibility for the stuff they publish.

  3. In general, scarcity applied to journal publication is a good thing. Imagine a world where journal publication space is an infinite resource, it’d encourage publication of infinitely more crap.

    So I think Andrew is wrong to attack scarcity per se. It’s the allocation profile of a scarce resource that’s going wrong. We could be in an alternative scenario where publication space still remains scarce but a compelling letter / critique gets the space in preference to a mundane paper.

    If one merely wants to "publish" one can easily post a pdf on arXiv or Scribd or one's own website. Journals are more useful precisely because space is a scarce commodity. I'd hate to read a non-selective, egalitarian journal.

    Too many people ignore a good journal’s curatorial / vetting function.

  4. @Rahul

    What is published is crap precisely because of the space limitation and curatorial function. That is what generates all the drama. Nothing stops you or anyone else from curating arXiv. And if you decide to do so, I'd encourage you to focus on the quality of the research design rather than dramatic findings.

    I think too many people trust the curatorial function of journals. Peer review need not be a good thing; there's nothing necessary or sufficient about it. And it could make things much worse by demanding "show me the significance," "where is the dramatic finding," etc.

    • But arXiv already exists! So does Scribd!

      Anyone is perfectly free to post their critique on such a platform if they hate journals not for their extant policies but because they fundamentally think curation = evil.

      • @Rahul

        Yes, the problem is that because people still think that what is published is of better quality, publication becomes the standard for promotions etc., which only compounds the problem.

        Ps I don’t think curation is evil. I think curation, quality assurance, and publication are distinct activities. Mixing them generates problems. Most reviews focus on interesting and significant and spend very little time on the details and veracity of research implementation. Yet consumers think peer reviw is mostly about quality assurance.

        To quote Jan Tinbergen: "one target, one instrument."

  5. Andrew, I agree wholeheartedly that peer review and journals "as is" give the wrong incentives to scientists. Sociologists especially should know that, but the sociology of science seems far too concerned with general deconstruction and offers too little sensible analysis of the concrete problems at hand. This again is probably partly caused by the wrong incentives: it's so much more exciting to write and read that science is just some other ideology or justification system or "just rhetoric." I don't think we should throw the baby out with the bathwater, but we still need to seriously analyze what problems occur in "real" science, not just in some imagined utopian or dystopian land.

    • “… peer review and journals “as is” give wrong incentives to scientists. Especially sociologists should know that…”

      Artificial scarcity, gatekeepers, ideology vs science, etc. Just what are the real-world incentives/motives here, especially for a flagship sociology journal (ASR of the ASA)?

      The American Sociological Association (ASA) is a non-profit corporation based in Washington, DC, claiming dedication to advancing sociology as a scientific discipline and professional service to the “public good”. Its 1905 founder, Lester F. Ward, promoted the introduction of sociology courses into American higher education. His core belief that ‘society could be scientifically controlled’ was especially attractive to intellectuals during the Progressive Era. Ward emphasized the importance of social forces which could be ‘guided at a macro level’ by the use of intelligence to achieve conscious progress, rather than allowing evolution to take its own erratic course.

      So the ASA mission is pure science — not a trace of political ideology or agenda in its organization, operations, or publications (?)

      What do sociologists objectively know about the fundamentals of their profession and its "as is" practices?

      • Ehlbach:

        If we're talking about sociology today, I don't know how much you're going to learn from a statement from 1905. Individual sociologists have their ideologies—generally left or center-left in the U.S. context; beyond this there is the research itself, which is sometimes ideological and sometimes not. And, indeed, many pieces of sociology research can be happily spun by partisans of the left and the right. For example, the paper by Hamilton that I discussed made the claim (fatally flawed by selection bias, I believe) that if you pay for your kid's college, he or she will do worse (on average). This (flawed) finding was seized on by people on the left as evidence that money can't buy everything, and by people on the right who are happy for any news that makes the educational system look bad. So it's complicated. In any case, the fact that sociologists have ideologies does not and should not mean they can't know something about sociology! Specialists in any field tend to have their distinctive ideologies, and I agree that this implies we can't automatically treat a specialist as a disinterested observer, but we can still try to make use of their insights and expertise.
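        The selection-bias mechanism at issue can be illustrated with a toy simulation (entirely hypothetical numbers, in no way a reanalysis of Hamilton's data): suppose parental funding has zero causal effect on GPA, but students enter the sampled population either because their parents pay or because high aptitude earns them a place on merit. Conditioning on enrollment then makes the funded group look worse.

```python
import random

random.seed(42)

students = []
for _ in range(100_000):
    aptitude = random.gauss(0, 1)
    funded = random.random() < 0.5      # parents pay, assigned at random
    gpa = 2.8 + 0.4 * aptitude          # funding has NO causal effect on GPA
    # Selection into the sample: enroll if funded, or if aptitude is
    # high enough to get in on merit alone (hypothetical rule).
    if funded or aptitude > 1.0:
        students.append((funded, gpa))

def mean_gpa(funded_flag):
    vals = [g for f, g in students if f == funded_flag]
    return sum(vals) / len(vals)

# Among enrolled students the funded group averages a lower GPA, even
# though funding had no effect: the unfunded enrollees were selected
# for high aptitude.
gap = mean_gpa(True) - mean_gpa(False)
```

        In this toy setup the observed gap is driven entirely by who gets into the sample, which is exactly the kind of alternative explanation a critic can raise without redoing the full analysis.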

  6. Brayden King added:

    My sense is that most journals rarely publish letters of response because they don’t like to use the print space. But this could be easily resolved in today’s age of online journals. Most readers of ASR don’t actually read the print version anyway. They read the articles online. It would be really easy to post letters, addendums, appendices, etc. directly underneath the original article in the online journal. There would be some cost, of course, including editorial services, but I think it would be well worth the investment.

    A number of biology journals allow online comments. All journals should allow this. A number of biology journals also post the referees' reports in the online version.

  7. Pingback: What’s in a name? Female hurricanes are deadlier than male hurricanes | theoretical ecology
