I’ve written about journal impact factors before, largely to argue that there are better statistics than the traditional impact factor. But an excellent editorial in the Oct. 10 issue of Science by Kai Simons points out a very obvious problem with how impact factors are used (italics mine):
Research papers from all over the world are published in thousands of science journals every year. The quality of these papers clearly has to be evaluated, not only to determine their accuracy and contribution to fields of research, but also to help make informed decisions about rewarding scientists with funding and appointments to research positions. One measure often used to determine the quality of a paper is the so-called “impact factor” of the journal in which it was published. This citation-based metric is meant to rank scientific journals, but there have been numerous criticisms over the years of its use as a measure of the quality of individual research papers. Still, this misuse persists. Why?
That really is a basic misuse of the statistic, particularly when you consider the following:
This algorithm is not a simple measure of quality, and a major criticism is that the calculation can be manipulated by journals. For example, review articles are more frequently cited than primary research papers, so reviews increase a journal’s impact factor. In many journals, the number of reviews has therefore increased dramatically, and in new trendy areas, the number of reviews sometimes approaches that of primary research papers in the field. Many journals now publish commentary-type articles, which are also counted in the numerator. Amazingly, the calculation also includes citations to retracted papers, not to mention articles containing falsified data (not yet retracted) that continue to be cited. The denominator, on the other hand, includes only primary research papers and reviews.
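To see the asymmetry concretely, here's a toy calculation in Python with made-up numbers for a hypothetical journal. This is a simplified sketch of the two-year impact factor, not the actual calculation pipeline: every citation counts toward the top of the fraction, but only "citable items" (primary research papers and reviews) count toward the bottom.

```python
# Minimal sketch of the two-year impact factor, with invented numbers
# for a hypothetical journal. The asymmetry Simons describes: the
# numerator counts citations to *everything* the journal published
# (commentaries, even not-yet-retracted papers), while the denominator
# counts only "citable items" (research papers and reviews).

# Items published in the two prior years, by type, with citation counts.
items = [
    {"type": "research",   "citations": 10},
    {"type": "research",   "citations": 2},
    {"type": "review",     "citations": 40},  # reviews draw far more citations
    {"type": "commentary", "citations": 8},   # cited, but not a "citable item"
    {"type": "research",   "citations": 15, "retracted": True},  # still counted
]

# Numerator: every citation to anything the journal published.
numerator = sum(item["citations"] for item in items)

# Denominator: only primary research papers and reviews.
citable = [item for item in items if item["type"] in ("research", "review")]
denominator = len(citable)

impact_factor = numerator / denominator
print(f"impact factor = {numerator} / {denominator} = {impact_factor:.2f}")
# -> impact factor = 75 / 4 = 18.75
# Dropping the commentary's 8 citations from the numerator (its item was
# never in the denominator anyway) would give 67 / 4 = 16.75.
```

The toy numbers also show why commissioning reviews pays off so handsomely: the single review contributes more than half the numerator on its own, while adding just one item to the denominator.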
At some point, to accurately assess a scientist’s body of work, you have to know the field. It can’t be reduced to numbers.