Imminent Impact: How important is this index?

Journal impact factor is not an accurate measure of individual article significance and quality. Photo by Sean MacEntee.

In scientific publishing, the impact factor continues to play a major role in determining the perceived quality of a journal. A journal’s impact factor is calculated as the number of citations received in a given year by the articles the journal published in the previous two years, divided by the number of articles it published in those two years. Although this metric was originally established by Eugene Garfield (Thomson Reuters Web of Science) as an easy way to measure the quality and impact of a journal and compare it with competing journals, it is now widely used to judge the merit of individual articles regardless of their scientific content. A question most researchers ask themselves repeatedly over their careers is where to submit their articles, especially since they themselves are judged by this metric. Indeed, impact factor is a major determinant in this decision, and it shapes the careers of many scientists. This raises two questions: is the impact factor a good basis for choosing a journal to publish in? And if it isn’t, why do scientists still rely on it when deciding where to send their work?
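
To make the calculation concrete, here is a minimal sketch of the two-year arithmetic described above. The journal, the year, and all of the counts are hypothetical and chosen only to illustrate the formula:

```python
# Hypothetical counts for an imaginary journal; only the arithmetic matters.
citations_in_2015 = 480   # citations received in 2015 by its 2013-2014 articles
articles_2013 = 100       # citable articles published in 2013
articles_2014 = 140       # citable articles published in 2014

# Two-year impact factor: citations this year divided by the number of
# articles published in the previous two years.
impact_factor_2015 = citations_in_2015 / (articles_2013 + articles_2014)

print(f"2015 impact factor: {impact_factor_2015:.2f}")  # 480 / 240 = 2.00
```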

In recent years the number of citations a journal receives has become almost synonymous with its impact factor, leading journals to choose their subject matter stringently and to favor articles on topics in demand. This ensures that the articles get cited and that the impact factor of the journal in question is maintained. However, it skews which scientific topics are considered “high impact” simply because of the journal they are published in. This is especially problematic because the impact factor is an aggregate of all the citations a journal receives, and that aggregate may reflect only a portion of the articles it publishes. Yet the easy availability of this value and its continued use, not only in publishing but also in recruiting candidates for scientific appointments, reinforce its importance even though it is an unbalanced measure. It is easy to screen candidates by the impact factor of the journals they have published in, even though this factor does not reflect the impact, quality, or citation count of any individual article. Since this practice is widely known and accepted in the scientific community, scientists try to publish their findings in high-impact-factor journals to further their careers.
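
The sketch below illustrates why an aggregate of this kind can be misleading. The citation counts are again hypothetical, but they mimic the common situation in which a couple of highly cited papers dominate a journal’s total, so that the average driving the impact factor says little about a typical article:

```python
from statistics import mean, median

# Hypothetical per-article citation counts for one journal over two years.
# Two outliers account for most of the citations.
citations_per_article = [0, 0, 1, 1, 2, 2, 3, 3, 4, 150, 210]

print(f"mean citations per article:   {mean(citations_per_article):.1f}")    # about 34.2
print(f"median citations per article: {median(citations_per_article):.1f}")  # 2.0
```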

Let us examine why the impact factor is neither an accurate nor a sensitive enough metric for judging a scientist’s academic merit. First, citation counts vary enormously even among publications from a single author, and their distribution only vaguely resembles a normal (Gaussian) curve; judging the actual scientific content and the quality of the techniques therefore provides better insight into merit. Second, what causes a journal or an article to be cited infrequently? Usually the work lies in a field off the beaten path, and is therefore less cited but not necessarily less important; or it addresses a topic with a slow citation growth rate; or it is not considered visionary or “hot” enough. None of this takes into account that such articles may still contain solid, useful data and convey information that is vital for further studies. Third, newer journals are not assigned a full impact factor until their third year is complete. Although partial impact factors are now quoted, they are not a full measure of a journal’s merit; yet these new journals often encourage exciting, ground-breaking science that the traditional methods cannot evaluate.

In fact, even though this value is accepted as a crude indicator of a journal’s influence on a field, it can be argued that it does not even serve that minimal purpose, because publishers can negotiate with Thomson Reuters over how it is calculated. These negotiations are rarely made public, yet they represent a significant variable that should be taken into account when judging a journal’s impact, especially since the impact factor has a way of increasing after such negotiations.

So what should be our recourse? Judging every journal and article by impact factor is deeply ingrained in scientists. More concerning, although most scientists quote and swear by impact factors, not all of them know what the metric truly depicts, how it is calculated, or that it is a poor way of judging science. Education about the true nature of this crude metric would help dispel the myth that a high-impact-factor publication is every aspiring tenure-track professor’s ticket to success. It would also help the scientific community better appreciate the science contained in individual articles and judge it critically. The ASCB’s Declaration on Research Assessment (DORA) presents guidelines to this effect, which nearly 13,000 individuals have signed and agreed to follow. Since all articles are now published online and are easily accessible, citations of individual articles, interpreted in the context of their field, should be the true measure of a scientific project.

We as scientists can help promote and spread this knowledge and thereby reduce the skew in which articles are considered high impact. We should stop judging an article by the name of the journal it appears in and instead assess the contribution its science makes to the field of study. It might also help if sub-sections of papers spanning a broad range of topics could be cited individually, since that would capture the impact of specific results on the field in a way the overall work may not make apparent. Moreover, we need to encourage fair evaluation of a candidate’s work through their own presentations or the techniques they use rather than the journals they publish in. Although impact factors have shaped scientists’ opinions and decisions for close to half a century, it is time we stopped assessing “impact” and started evaluating what is good and rigorous science.

About the Author:


Arunika is a post-doctoral researcher in the labs of Drs. Michael Lampson and Ben Black at the University of Pennsylvania. She is working on the mechanism of centromere inheritance and maintenance in the mammalian germline.