Journals like to pretend that the impact factor is like a batting average in sports. But it’s not.
Sports fans love statistics. In baseball and cricket, probably no statistic is followed more closely than the batting average. In baseball it’s determined by hits/at bats; in cricket by runs/outs. In both sports, the outcome is essentially the same – a single number that gives a decent reflection of an individual’s performance. High batting average = good individual performance.
The batting average is a descriptive statistic, a simple calculation that gives no sense of the spread. This fact is perhaps more intuitively obvious with the example of cricket (how often can you say that?): a batsman who scores 0, 0, 0, and 100 will have the same batting average (25) as one who scores 25, 25, 25, 25.
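The point can be checked in a few lines of Python, using the hypothetical score sequences above:

```python
# Two hypothetical batsmen: one steady, one streaky (illustrative numbers only).
steady = [25, 25, 25, 25]
streaky = [0, 0, 0, 100]

def batting_average(scores):
    """Runs divided by completed innings – the cricket definition."""
    return sum(scores) / len(scores)

print(batting_average(steady))   # 25.0
print(batting_average(streaky))  # 25.0 – same average, very different spread
```

The average alone cannot distinguish the two; you would need a measure of spread (variance, or just the raw scores) to tell them apart.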
With sports, that potential for a very skewed distribution is relatively low. While individuals’ performances do fluctuate, a player’s form is fairly constant over a given time period and they get plenty of opportunities to bat, so the average is a decent snapshot of their hitting in any given season or period. Hence, the statistic is closely followed and taken as a reliable measure of the individual’s batting.
Scientific journals and their publishers have become indecently besotted with the impact factor (IF), a single-figure statistic that at first glance would appear to have much in common with the batting average. A journal’s IF for a given year is calculated by totting up the citations received that year by the papers it published in the preceding two years, and then dividing by the number of those papers – in other words, the mean number of citations per paper.
Seems simple, right? A high IF means the papers in a particular journal are being heavily cited, and – so the logic goes – must therefore be of greater quality, greater interest, and wider impact.
(For now, let’s leave aside the question of whether citations themselves should be taken as an indication of a paper’s worth – and there are many reasons why they shouldn’t be).
So a high IF means the journal publishes good work, and if you publish in that journal then your work must be higher impact and it will attract loads of citations?
No. The journal average as depicted by IF says nothing about the quality of individual papers.
In fact, citation distributions for journals tend to mirror the less likely case of the batsman who gets 0, 0, 0, and 100. Because IF is an arithmetic mean, a small number of very highly-cited papers will drag the average up. Most papers – even in journals with the highest IFs – get roughly the same number of citations (in fact, around three-quarters of the papers published in a journal will receive fewer citations than the journal’s IF).
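A short sketch makes the skew concrete. The citation counts below are made up, but they have the long-tailed shape typical of real journals: a handful of heavily-cited papers and many modestly-cited ones.

```python
import statistics

# Hypothetical citation counts for 20 papers in one journal's IF window.
citations = [0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 5, 5, 6, 8, 10, 40, 120]

impact_factor = sum(citations) / len(citations)        # arithmetic mean
median = statistics.median(citations)
below_if = sum(1 for c in citations if c < impact_factor)

print(f"impact factor (mean): {impact_factor}")        # 11.0
print(f"median citations:     {median}")               # 3.0
print(f"papers below the IF:  {below_if}/{len(citations)}")  # 18/20
```

Two outliers are enough to push the mean to 11 while the typical paper gets 3 citations, so the vast majority of papers sit below the journal’s headline figure.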
Secondly, and more importantly, batting averages look at the performance of individuals, while the impact factor is an aggregate from publications by many different research groups. It is obvious that having your paper published in the same issue as a paper that gets 700 citations will have no influence on the quality and reception of your work, but that’s precisely the logic that publishers are trying to feed authors when they promote the impact factors of their journals.
It is disingenuous, probably even dishonest, to advertise IF to authors. The IF has nothing to say about individual scientists, their work, or their publications. Equally, it is not just lazy and irresponsible, but arguably negligent to assess an individual scientist’s work or the value of one of their publications based solely on where it’s published.
The backlash against the IF is now well under way, but it remains incumbent on individual scientists to reject this flawed measure, and hit it out of the park.