This is an amended version of an editorial to be published in Journal of Documentation.
Impact factors have been, for quite a few years now, the single metric most closely associated with the ‘quality’ of an academic journal, or similar dissemination mechanism. This simple, perhaps simplistic, measure has been receiving an increasing level of criticism recently. An interesting example is a study by George Lozano, Vincent Larivière, and Yves Gingras, published in the Journal of the American Society for Information Science and Technology, showing that the proportion of highly cited papers coming from high-impact journals is steadily decreasing. A main reason for this seems to be the increasing tendency for readers to find articles through search engines, rather than through the tables of contents and indexes of individual journals, severing the close link between the perceived quality of articles and that of journals. This study has received a good deal of publicity; see, for example, the article by Dan Cossins in The Scientist newsletter.
There are a number of things to be said about this, from the perspective of a scholarly journal such as Journal of Documentation (which I have the pleasure of editing). First and foremost, we have always been wary of choosing a single metric as the measure of how well a journal is doing; there are other, arguably equally or more relevant, measures. One such, produced from the same dataset as the impact factor, is a journal’s ‘half-life’: a measure of the length of time for which its material remains useful and used. JDoc has always had a very long half-life, equal to that of the major review journals of the field, something in which we have taken great satisfaction.
There are also new metrics, appearing as scholarly communication becomes an increasingly digital business, often grouped under the label of altmetrics. The most obvious of these, though by no means the only one, is the number of downloads of articles. While not at all the same as an impact factor, this is an alternative, and arguably equally if not more valid, way of assessing a journal’s ‘reach’ and influence.
The most dramatic possibility, of course, hinted at by many of these new developments, is that the academic journal itself will undergo far-reaching change, as the viability of an information dissemination system developed to be produced in a convenient print-on-paper package is tested in an information environment which is not merely digital, but increasingly interactive and decentralized. Volumes, issues and pages, essential concepts for print journals, cease to have much meaning in the digital environment, and possibilities for interaction vastly exceed older ideas of errata and letters to the editor. It may be that the effect of these changes will turn out to be so major for the scholarly journal that the issue of the impact factor will come to be seen as entirely insignificant.
What I’d like to know is whether authors are relying on the impact factor less in deciding where to publish.
I do not think there have been any proper studies of this yet, but anecdotal evidence suggests this may be so. For academic authors, much depends on the policies of their institutions and funders, many of whom still love impact factors.