Impact factors

A bunch of years ago, I published an article (using some of the material in my Ph.D. thesis) in the Journal of Cerebral Blood Flow and Metabolism. It’s ranked as the #25 journal in neuroscience, and has a pretty crappy impact factor of 5.7.

By comparison, the impact factors of the top statistics journals a few years ago were:
JASA 1.6, JRSS 1.5, Ann Stat 1.3, Ann Prob 0.9, Biometrika 1.8, Biometrics 1.1, Stat Sci 2.0, Technometrics 1.3.

So now you know why statisticians don’t like impact factors.

8 thoughts on “Impact factors”

  1. What's the ratio of citations (impact factor) per coauthor across the fields? What about citations per dollar of grant money?

  2. Impact factors (IFs) aren't designed to compare across disciplines: they're meant to compare within disciplines. If you're at a skint university whose library can only afford 10 stats journals, you might use IFs to decide which to buy. It doesn't always work; e.g., the highest-impact stats journals are usually in bioinformatics and biostatistics, which most methodological statisticians probably don't rate as highly as JRSSB or JASA. But you definitely shouldn't use IFs to compare research output, which, sadly, is what they often are used for.

    Having said that, I think we statisticians should take some of the responsibility for the low impact factors in the discipline. My impression from reading papers in stats and biology is that we statisticians often don't follow scientific norms in citation practices. A colleague once told me that if he thought something was "well known" he didn't bother providing a reference. Of course, what is well known to one person is unknown to another.

    If we want to do better when administrators compare our discipline to others using IFs, we should learn from how other scientific disciplines cite the work of others.

  3. Alex: Point taken. But still . . . the #25 journal in neuroscience has well over twice the impact factor of any of the top statistics journals. That's a huge discrepancy.

  4. Impact factors only count citations in the 2 calendar years following the publication of an article. This right-censoring heavily penalizes statistics journals, whose papers tend to accumulate citations slowly, often long after that window closes.
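    To make the point concrete, here is a minimal sketch of the standard two-year impact factor calculation (citations in year Y to items from years Y-1 and Y-2, divided by the number of citable items from those two years). All the numbers below are invented for illustration, not taken from any real journal.

    ```python
    # Sketch of the standard two-year impact factor.
    # Every number here is made up for illustration.

    def impact_factor(cites_to_prev_two_years, items_prev_two_years):
        """IF for year Y = citations in Y to items published in Y-1 and Y-2,
        divided by the number of citable items published in Y-1 and Y-2."""
        return cites_to_prev_two_years / items_prev_two_years

    # A fast-citing field: most citations arrive within two years.
    print(round(impact_factor(5700, 1000), 1))  # -> 5.7

    # A slow-citing field: the same work may eventually be cited heavily,
    # but citations arriving after the two-year window are never counted.
    print(round(impact_factor(1300, 1000), 1))  # -> 1.3
    ```

    The division itself is trivial; the censoring is the whole game. A statistics paper cited 50 times over 20 years can contribute almost nothing to its journal's IF.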

  5. quantity vs. quality?

    There's a lotta crap getting published out there. And we're probably more guilty of it in the biomedical sciences than you are in the mathematical sciences.

    And you must remember that, when calculating impact factor, a citation is a citation, no matter what journal it's in. This presents complications when you consider the range and variability of scientific quality of all the published research out there.

    Just imagine how many blogs there are on the Internet. Now, imagine if Google indexed them using an algorithm akin to Impact Factor rather than PageRank.

    Quality matters.

  6. IF is, at bottom, a measure of popularity. The IFs of some top journals are dropping because they have become more selective, while some new journals see rising IFs because it is easier to publish papers in them. Some papers in leading journals are rubbish, and some papers in new journals may be precious. Forget about IF; just do what you like, and do what you can.

Comments are closed.