
Impact factor 911 is a joke

- October 15, 2013

(Screenshot: ResearchGate.net)
Brian Silver points us to an article, “Deep impact: unintended consequences of journal rank,” by Bjorn Brembs, Katherine Button, and Marcus Munafo. The article begins:

Most researchers acknowledge an intrinsic hierarchy in the scholarly journals (“journal rank”) that they submit their work to, and adjust not only their submission but also their reading strategies accordingly. On the other hand, much has been written about the negative effects of institutionalizing journal rank as an impact measure. . . . In this review, we present the most recent and pertinent data on the consequences of our current scholarly communication system with respect to various measures of scientific quality (such as utility/citations, methodological soundness, expert ratings or retractions). These data corroborate previous hypotheses: using journal rank as an assessment tool is bad scientific practice. Moreover, the data lead us to argue that any journal rank (not only the currently-favored Impact Factor) would have this negative impact. Therefore, we suggest that abandoning journals altogether, in favor of a library-based scholarly communication system, will ultimately be necessary. This new system will use modern information technology to vastly improve the filter, sort and discovery functions of the current journal system.

I like this paper a lot (possibly because I already had a high opinion of an earlier paper by Katherine Button). There’s been a lot of discussion in the last couple of years about reforming the peer review system (see, for example, here and here for some discussions in the context of political science), but the paper under discussion takes things to the next level. Their paper feels like the work of objective scientists rather than opinionated participants in the system (which is how I typically feel when having these discussions; I have a lot of ideas that seem reasonable, but no systematic way of evaluating them).
As I wrote last year, the current system of scientific publication has obvious problems; one result is that almost anything can seem like a good solution. It’s sort of like education reform: choose Back to Basics, or Student-Centered Learning, or whatever; any of these ideas could be good, but it depends on how they are implemented.
And Silver had the following reaction to the Brembs et al. paper:

One issue that I have with the study that I sent you to is that it never makes clear which academic fields its data come from.
My guess is that there is considerable heterogeneity across scientific disciplines. Specifically I also know that the mean IF (impact factor) varies enormously across fields. Generally very low in softer social sciences, higher in harder ones including Psych. I think the same is true of citation ratios. And citations may matter a lot more for some fields than for others — the returns per publication, per IF score and per citation in terms of the probability of professional success (salaries, promotion and tenure) may also vary greatly.
So they need to use a hierarchical model. This would be true even if they stuck to medical sciences. Or to physical and mathematical sciences. There are enough journals in each academic discipline (and many subdisciplines) to learn something (to get stable estimates), but they shouldn’t compress them all together.

Yes, I’ve noticed the variation in impact factor. The highest-rated statistics journals have impact factors of around 3, which puts them on par with run-of-the-mill biology journals.
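To make Silver’s hierarchical-model suggestion concrete, here is a minimal sketch in Python of the partial-pooling idea: estimate a mean impact factor per discipline and shrink the noisier discipline means toward the overall mean, rather than either lumping all journals together or treating each field in isolation. The disciplines, journal counts, and “true” means below are made up for illustration, and a real analysis would fit the full model (in Stan, say); only the shrinkage step is the point.

```python
# Sketch of partial pooling of journal impact factors by discipline.
# All numbers below are hypothetical; this is not data from the Brembs et al. paper.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical disciplines: (assumed mean impact factor, number of journals)
disciplines = {
    "political science": (1.2, 40),
    "statistics":        (1.8, 60),
    "psychology":        (2.5, 120),
    "biology":           (3.5, 300),
}

# Simulate journal-level impact factors within each discipline
data = {name: rng.normal(mu, 1.0, size=n) for name, (mu, n) in disciplines.items()}

all_ifs = np.concatenate(list(data.values()))
grand_mean = all_ifs.mean()

# Crude variance components: within-discipline noise and between-discipline spread
sigma2_within = np.mean([np.var(x, ddof=1) for x in data.values()])
raw_means = {name: x.mean() for name, x in data.items()}
tau2_between = np.var(list(raw_means.values()), ddof=1)

print(f"Complete pooling (one number for every journal): {grand_mean:.2f}")
for name, x in data.items():
    n = len(x)
    # Normal-normal shrinkage: weight the discipline mean by its precision
    weight = tau2_between / (tau2_between + sigma2_within / n)
    partial = weight * raw_means[name] + (1 - weight) * grand_mean
    print(f"{name:18s}  no pooling: {raw_means[name]:.2f}   partial pooling: {partial:.2f}")
```

Disciplines with many journals keep estimates close to their own mean, while small fields get pulled toward the grand mean, which is the usual argument for not compressing everything into a single pool.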
P.S. I have nothing in particular against the journal shown in the image above. I just wanted to use this cute title, so I searched “impact factor 9.11,” guessing correctly that, with so many journals out there, there’d be at least one with this particular value.