Hey! Here’s a referee report for you!

I just wrote this, and I realized it might be useful more generally:

The article looks reasonable to me, but I just did a shallow read and didn’t try to judge whether the conclusions are correct. My main comment is that if they’re doing a Poisson regression, they should really be doing an overdispersed Poisson regression. I don’t know if I’ve ever seen data in my life where the non-overdispersed Poisson is appropriate. Also, I’d like to see a before-after plot with dots for control cases and open circles for treatment cases and fitted regression lines drawn in. Whenever there’s a regression I like to see this scatterplot. The scatterplot isn’t a replacement for the regression, but at the very least it gives me intuition as to the scale of the estimated effect. Finally, all their numbers should be rounded appropriately.
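The overdispersion point is easy to check on any count data. Here’s a minimal sketch with simulated counts (not from any particular paper): draw from a Gamma-Poisson mixture, fit the simplest Poisson model, and compute the Pearson dispersion statistic, which should be near 1 if the plain Poisson were adequate.

```python
import math
import random

def rpois(lam, rng):
    """Knuth's Poisson sampler; fine for the modest rates used here."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

rng = random.Random(42)

# Overdispersed counts: Gamma-Poisson mixture with mean 5.
# Var = mu + mu^2/shape = 5 + 25/2 = 17.5, well above the Poisson's 5.
shape, mu = 2.0, 5.0
y = [rpois(rng.gammavariate(shape, mu / shape), rng) for _ in range(2000)]

# Intercept-only Poisson fit: the MLE of the rate is the sample mean.
mu_hat = sum(y) / len(y)

# Pearson statistic over residual degrees of freedom.
# Near 1 under a correct Poisson model; far above 1 here.
dispersion = sum((yi - mu_hat) ** 2 / mu_hat for yi in y) / (len(y) - 1)
print(f"estimated dispersion: {dispersion:.2f}")
```

When this ratio is well above 1, a quasi-Poisson scale adjustment or a negative binomial model is the natural fix; the before-after scatterplot is then just a matter of plotting the raw points and fitted lines.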

Feel free to cut-and-paste this into your own referee reports (and to apply these recommendations in your own applied research).

13 thoughts on “Hey! Here’s a referee report for you!”

  1. That was brilliant.


    It seems to me that Poisson regression is the most misunderstood and/or misapplied technique in my area (epidemiology) although I'll admit that it is a competitive category.

  2. Why bother providing such shallow and non-specific reviews of papers? I'd be very disappointed to receive a review like this of something that probably took a fair amount of work. I think when someone agrees to review they are signing up to thoroughly read the paper and provide their informed opinion about whether the work is correct, sufficiently noteworthy and relevant to be published in the targeted journal and how it might be improved. Your review — "it looks awright, I'd prefer this and that" — seems to only barely address the last of these (but seeing as you only glanced at it, you could be wide of the mark).

  3. Do they even count toward the total number of reviewers with a review this short?

    I'd be happy to do a lot more reviews if I could get away with this length and depth.

    I'd really rather just send comments I think might be useful to the author(s) and let someone else split hairs on the publish/no-publish boundary.

  4. Speaking of Poissons and dispersion, how do you like to get your overdispersion, through the model or through Bayesian inference?

    With a Poisson model and Gamma prior, you get a negative binomial posterior predictive distribution. This seems to make sense to me if the overdispersion comes from uncertainty rather than from failing to model some important effect. Then with enough data, we converge back to a Poisson.
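    The conjugacy claim is quick to verify by Monte Carlo (a self-contained sketch with arbitrary prior parameters): mixing a Poisson over a Gamma-distributed rate should reproduce the negative binomial's mean and variance.

```python
import math
import random

rng = random.Random(7)

def rpois(lam):
    """Knuth's Poisson sampler; adequate for the rates drawn here."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

# Prior lambda ~ Gamma(shape a, rate b); data y | lambda ~ Poisson(lambda).
# Marginally y is negative binomial with
#   mean = a / b   and   variance = mean * (1 + 1/b),
# i.e. overdispersed for any finite b.
a, b = 3.0, 0.5   # analytic mean 6, variance 18

# Monte Carlo from the mixture (gammavariate takes shape and SCALE = 1/rate):
draws = [rpois(rng.gammavariate(a, 1.0 / b)) for _ in range(50_000)]
m = sum(draws) / len(draws)
v = sum((d - m) ** 2 for d in draws) / (len(draws) - 1)
print(f"mixture mean {m:.2f} (analytic 6), variance {v:.2f} (analytic 18)")
```

    As b grows (prior concentrating on one rate), the variance inflation factor 1 + 1/b tends to 1, which is the "converge back to a Poisson" point above.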

    Although you typically observe something pretty close to a Poisson with large sets of genomic sequencing data, an increasingly popular technique is to fit a negative binomial's mean and dispersion parameter (for, say, 30K genes) and make point inferences from that. The edgeR package for differential RNA expression uses "empirical" Bayes to estimate a prior that shrinks the dispersions toward a common value. They then reason with point estimates.
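    edgeR's actual weighted-likelihood machinery is more involved, but the shrinkage idea can be caricatured in a few lines (a toy sketch, not the package's algorithm; gene names and counts are made up): estimate each gene's dispersion by method of moments, then pull the noisy per-gene estimates toward the common value.

```python
def mom_dispersion(counts):
    """Method-of-moments estimate of phi in Var = mu + phi * mu^2,
    clipped at zero when the sample looks under-dispersed."""
    n = len(counts)
    m = sum(counts) / n
    v = sum((c - m) ** 2 for c in counts) / (n - 1)
    return max(0.0, (v - m) / (m * m)) if m > 0 else 0.0

# Hypothetical replicate counts for a handful of "genes".
genes = {
    "geneA": [12, 30, 7, 21],
    "geneB": [100, 95, 110, 102],
    "geneC": [3, 0, 9, 2],
}

per_gene = {g: mom_dispersion(c) for g, c in genes.items()}
common = sum(per_gene.values()) / len(per_gene)

# Shrink each per-gene estimate toward the common value. The weight w
# stands in for the prior's strength; edgeR chooses it more cleverly,
# with less shrinkage for genes with more information.
w = 0.7
shrunk = {g: w * common + (1 - w) * phi for g, phi in per_gene.items()}
print(shrunk)
```

    With only a few replicates per gene the raw estimates are very noisy, so borrowing strength across the 30K genes is what makes the point estimates usable.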

    Or should we go all out and find a prior for something like a negative binomial, then use proper posterior inference for even more dispersion?

  5. David:

    My review may be shallow but I still think it is useful. The editor asked me what I thought and I told him. The authors of the article seem to know a lot but they didn't know this important information about the Poisson model. (According to Joseph, a lot of people don't know this.)

    You write, "when someone agrees to review they are signing up to thoroughly read the paper and provide their informed opinion about whether the work is correct, sufficiently noteworthy and relevant to be published in the targeted journal and how it might be improved."

    Huh? I never signed up for that! If journal editors really think my reviews are so useless, they could stop asking me to review manuscripts. This hasn't happened yet.


    Indeed, I often do say no to referee requests. I'm on the editorial boards of several journals, though, and I hate to say no. I try to contribute as best I can. And, when I can post something useful on the blog, I'll do that too.

  6. My experience of publishing in a couple of medical journals lately is that the "normal" reviews are about as short as this (although they do usually make more of an effort to help the editor decide if the paper is worth publishing). It's the third, statistical reviewer that's the pain. S/he invariably writes more than the editor and other referees combined, usually in a patronising way, and often without checking details first. If only all statistical reviewers took a leaf from Andrew's book and chilled out a bit.

  7. Dear Andrew:

    Poisson regression is a fairly common tool in neuroscience applications, but in that setting the data are often underdispersed relative to the Poisson distribution.

    The phenomenon results from the fact that "spike counts" collected in neuroscience experiments arise from a point process that is more regular than Poisson. (This depends somewhat on the brain area and type of experiment, but certainly holds in early sensory areas).

    I don't have a handy reference for this, but see any recent work from Liam Paninski, Emery Brown, or Rob Kass; all three have worked on models that seek to capture the "sub-Poisson" variability of neural responses.
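    The sub-Poisson point can be illustrated with a simulated renewal spike train (made-up rates, not data from any of those papers): if inter-spike intervals are Gamma with shape > 1, i.e. more regular than exponential, the spike-count Fano factor (variance over mean) drops below 1, which a Poisson process never does.

```python
import random

rng = random.Random(11)

def spike_count(duration, rate, shape):
    """Count events in [0, duration] for a renewal process whose
    inter-event intervals are Gamma(shape, scale=1/(rate*shape)).
    shape == 1 recovers a Poisson process; shape > 1 is more regular."""
    t, n = 0.0, 0
    while True:
        t += rng.gammavariate(shape, 1.0 / (rate * shape))
        if t > duration:
            return n
        n += 1

# 2000 trials of a 1-second window at 20 spikes/s with regular intervals.
counts = [spike_count(1.0, 20.0, 4.0) for _ in range(2000)]
m = sum(counts) / len(counts)
v = sum((c - m) ** 2 for c in counts) / (len(counts) - 1)
fano = v / m
print(f"mean count {m:.1f}, Fano factor {fano:.2f}")  # well below 1
```

    For long windows the Fano factor of this process approaches 1/shape, so shape = 4 gives roughly 0.25 — the kind of "sub-Poisson" variability described above.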

  8. Do you have an example (e.g., from this blog or BDA) of the scatterplot you're describing? I can't quite visualize it.

  9. Yes, you see this type of response often.

    When I became an electrical engineer and joined my respective professional organization, I was excited. Our organization touted the virtues of mentoring new members and providing guidance for the unique challenges young engineers would face in their new profession. My enthusiasm soured considerably as my interactions with senior members revealed little heed for these mandates.

    I am sure there are those keenly aware of the importance of due diligence for requests of their time, like paper reviews, and take complete ownership when they agree to do it. It is, after all, their time. The problem is… individuals like this are few and far between.


  10. Michael:

    Millions of scientific articles are submitted to journals every year. If all the reviewers were doing the "due diligence" of which you speak, I think that no work would ever get done!

    In any case, for clarity I will point you to my latest blog entry.

    You might also be interested to know that I sent my review in response to the following request from the journal editor:

    I [the editor] will be interested in your opinion of the article and any suggestions for improvement.

    It was an article that the editor had invited the authors to submit, and I sent my comments as requested, and the next day the editor thanked me.

    I posted my review on the blog because I thought the points about the Poisson regression and the before-after plot were general enough that they might help some of the thousands of readers of this blog as well as the two authors of that particular article. I don't see why I have any professional duty to restrict my thoughts to narrow audiences.

  11. Andrew,

    Oh, aye. I see where you're coming from, me lad.

    Thank you for your work and this blog. I enjoy reading your posts and glean a bit-of-something from each.

