I owe it all to the haters

Sometimes when I submit an article to a journal it is accepted right away or with minor alterations. But many of my favorite articles were rejected or had to go through an exhausting series of revisions. For example, this influential article had a very hostile referee and we had to seriously push the journal editor to accept it. This one was rejected by one or two journals before finally appearing with discussion. This paper was rejected by the American Political Science Review with no chance of revision, and we had to publish it in the British Journal of Political Science, which was a bit odd given that the article was 100% about American politics. And when I submitted this instant classic (actually at the invitation of the editor), the referees found it to be trivial, and the editor did me the favor of publishing it, but only by officially labeling it as a discussion of another article that appeared in the same issue. Some of my most influential papers were accepted right away, but most of them faced serious opposition on the way to publication.

Colleagues of mine have had similar experiences with their favorite pieces, and I’ve long held the comforting belief that these works were just ahead of their time, that the silly reviewers didn’t get the point of our path-breaking research.

But then I thought, maybe the causal arrow goes in the other direction (as we say in the social sciences). Those endless annoying edits we need to make in response to those stupid referees . . . what if these edits actually make our articles better, clearer, more readable, even stronger in the substance? (I know they did in my 2006 paper: somewhere in the revision process I added a whole new section that made the research much more general and interesting.) Maybe if the original editors of these papers had been more reasonable, these papers would have been of lower quality and less influential. Scary thought, that. The effort taken to clarify an argument in the face of skeptical reviewers can pay off in making the published article more acceptable to others.

P.S. The negative reviewer of our 1992 paper hated my collaborator and me personally (or so I’ve heard) and thought the paper was crap (of course), although for some reason he never got around to contributing to the ensuing discussion in the journal. I suppose it’s easier to trash something anonymously than to form a cogent criticism that’s worth signing under your own name. Maybe if he could have done it all over he would’ve given our original submission a rave review so as to deprive us of the opportunity to improve it in the revision!

9 thoughts on “I owe it all to the haters”

  1. As a martial arts instructor once told me: “the way to ruin a fighter [as a coach] is to tell them they are very good.”

    Skeptical reviewers are very helpful (if imagined, no matter how difficult, as _trying_ to be helpful).

    (And that instant classic caught the WinBUGS folks with their guard down, at least the one I emailed it to.)

  2. I agree with the ambivalence of Gelman's post toward the "haters": many of those stickler referee reports really do ultimately improve a paper. But when, as I hope is rare, blanket rejections are due to a reviewer being dead set against permitting a position to see the light of day, especially if it might undermine their own arguments or pet position, then I say it is intellectual dishonesty that violates scholarly norms/progress. I'm sorry to say that in philosophy of science, anyway, there are some journals that seem to have allowed reviewers the power to exercise an automatic knee-jerk opposition to anything in support of frequentist statistical foundations, however carefully argued. Even in analyzing a historical episode that happens to have been based on significance testing, I have seen authors forced to first demonstrate that the episode could not have been reconstructed Bayesianly. In those cases reports are dismissive and not helpful in the least. In that connection, while indirect, I was glad to hear Cox give a boost to frequentist statistical foundations in his talk today at the ASA meeting!

    • I'm not sure I understand the analogy with the measure theory course, but I'm impressed my remark was not rejected as a knee-jerk reaction!

      • Admittedly, a bit idiosyncratic…

        1. Andrew has admitted to never having studied measure theory, so I thought he might enjoy the link.

        2. The post and your comment were highlighting (perhaps poorly motivated) dismissive reviews. My comment had suggested the value of pretending that dismissive reviews were actually trying to be helpful (old advice from one of my economics colleagues). Afterwards I came across the measure theory link and thought “this can’t possibly be sensible”. Then it occurred to me that I should extend the advice and pretend that the link I found _silly_ was somehow very sensible.

        As an aside, I know there is academic racketeering in all disciplines, but I agree with zhai2nan2 that it is usually just people thinking they really are much cleverer. And very occasionally they are, and even if not, you can become cleverer by not doubting that. (And though “you can’t doubt what you don’t doubt,” you can pretend something is true even if you suspect it isn’t?)

        Hopefully David’s talk will become available online.

        And for disclosure, I found the measure theory link looking to compare what short courses were accepted given mine (below) was rejected without any review comments :-(

        Title:

        EXPLAINING AND DISPLAYING COMPLEX META-ANALYSES AND COST EFFECTIVENESS ANALYSES USING CURVES FROM SIMULATIONS.

        Course Type:

        Half Day

        Course Level:

        Advanced

        Background:

        This course will first explain, using Galton’s intuitive physical model of Bayesian analysis (Stigler, JRSSA 2010), how posteriors arise from priors and data models, and how complex likelihoods for any parameter of interest can be obtained directly as posterior/prior. It will then be shown how to do this for simple meta-analysis examples using direct simulations. Afterwards, MCMC sampling methods from R software will be used to demonstrate obtaining similar priors and likelihoods from complex meta-analyses and cost-effectiveness analyses that can then be plotted. A frequentist-based analogue will then be covered for obtaining similar plots without explicitly making prior assumptions. A distinct advantage here is that the resulting likelihoods can be split into individual observations so that their consistency/heterogeneity can also be graphically inspected. An approximate Bayesian method for doing the same will then be demonstrated. The construction of such plots will help explain and display complex analyses.
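        To make the “direct simulations” step concrete, here is a minimal sketch (my own illustration, not course material; the course itself uses R, and the binomial toy example and all names below are assumptions chosen for brevity): draw the parameter from its prior, simulate data, keep the draws that reproduce the observed data to approximate the posterior, and then read off the likelihood as posterior/prior.

        ```python
        # Direct-simulation (rejection) sketch of likelihood = posterior/prior.
        # Illustrative only; a toy binomial stands in for a meta-analysis.
        import numpy as np

        rng = np.random.default_rng(42)

        # Toy data (an assumption for illustration): 7 successes in 10 trials.
        n, y_obs = 10, 7

        # Step 1: draw the success probability p from a Uniform(0, 1) prior.
        n_sims = 200_000
        p_prior = rng.uniform(0.0, 1.0, size=n_sims)

        # Step 2: simulate a data set from the model for each prior draw.
        y_sim = rng.binomial(n, p_prior)

        # Step 3: keep the draws whose simulated data match the observed data;
        # the kept draws approximate the posterior p(p | y_obs).
        p_post = p_prior[y_sim == y_obs]

        # Step 4: likelihood(p) is proportional to posterior(p) / prior(p).
        # With a flat prior the posterior histogram is itself proportional to
        # the likelihood, so compare its shape to the exact binomial likelihood.
        bins = np.linspace(0.0, 1.0, 21)
        post_hist, _ = np.histogram(p_post, bins=bins, density=True)
        mid = 0.5 * (bins[:-1] + bins[1:])
        exact = mid**y_obs * (1.0 - mid) ** (n - y_obs)

        # Scale both curves to a common maximum so the shapes can be compared.
        print(np.round(post_hist / post_hist.max(), 2))
        print(np.round(exact / exact.max(), 2))
        ```

        Under the flat prior each of the 11 possible outcomes is equally likely, so roughly 200,000/11, or about 18,000, draws survive the match in step 3, which is plenty to estimate the curve; for continuous data one would instead keep draws whose simulated summaries fall within a tolerance of the observed ones.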

  3. 'But when, as I hope is rare, blanket rejections are due to a reviewer being dead set against permitting a position to see the light of day, especially if it might undermine their own arguments or pet position, then I say it is intellectual dishonesty that violates scholarly norms/progress.'

    Intellectual dishonesty is alienating. (See Brian Martin's many works on intellectual dishonesty in science.)

    'I have seen authors forced to first demonstrate that the episode could not have been reconstructed Bayesianly. In those cases reports are dismissive and not helpful in the least.'

    Academics are very clever. The dark side of cleverness is a tendency to dismiss matters as unworthy of attention. This is a necessity for abstract thought, but it can get abusive.

    On a personal note – the harsh rejections that tell me where I went wrong don't hurt as much as the trivial rejections that say, "Yeah, we can tell just by glancing at this that it wouldn't be worth our time to read it."

  4. Obviously there are helpful reviewers and there are unhelpful ones. The question is what the partition between the two categories looks like. It probably varies between fields and, to some extent, between journals within a field. Personally, I feel that on balance reviews suck and having no reviewers would ultimately be more fruitful.

  5. Pingback: The importance of style in academic writing « Statistical Modeling, Causal Inference, and Social Science
