The current world record in the 100-meter dash, held by the Jamaican runner Usain Bolt, is 9.58 seconds. Bolt actually ran it in 9.572 seconds, but the rules require rounding up to two decimal places. And yet in the 100-meter dash, a third decimal place can make the difference at the finish line between gold, silver, and bronze.
Scientists love precision, so of course we love decimal places (and gold medals). But did you know that the JIF—the notorious journal impact factor—is computed to the third decimal place? Does a thousandth of a JIF point make a significant difference in assessing a scientific paper, a career, or a research proposal? We’ll take a closer look at this curious precision in a moment, but first I want to explain why numbers and the JIF have been much on my mind this week.
On Saturday, DORA turned two. It was on May 16, 2013, that the San Francisco Declaration on Research Assessment or DORA went live. A group of publishers, scientists, editors, and other stakeholders in the scientific research enterprise nailed up the DORA declaration, calling on scientists and scholars worldwide to commit themselves to stopping the use and misuse of the JIF as a metric for assessing individual research. In the two years since, ASCB has been intensely proud of our role in DORA. The original group of a dozen scientific “insurgents” who morphed into the DORA coalition first convened at the 2012 ASCB Annual Meeting in San Francisco. The DORA declaration in May 2013 was the result of months of careful discussion and drafting, but it still bears the stamp of its origins—it is the San Francisco DORA.
DORA’s second birthday has made me think about numbers and the JIF. I just looked at the DORA site and, as of this writing, 12,337 individuals and 572 organizations have signed. This is nothing to sneeze at. Before DORA, scientists with qualms about the JIF perhaps felt powerless. As Alexis de Tocqueville wrote in his Democracy in America, individual freedom is powerless if individuals cannot form opinion groups and associations to push forward a common cause. DORA has empowered many scientists to speak up and put forward reasons why it makes no sense to continue using the JIF to evaluate single scientific articles, let alone individual researchers.
Through DORA, we know there are 12,337 of us (and I know for sure that there are thousands more) who see JIF for what it is—a branding device that has become an out-of-control destructive force in world science. It warps our collective judgment, forces individuals to waste time and finite resources on the so-called impact factor ladder, and spreads its noxious effect throughout the global research community.
Yet the deeper you dig into the numbers behind the JIF, the more you realize that we, in the global scientific community, are self-made victims of a colossal misunderstanding. The JIF was developed as a tool to help librarians buy journal subscriptions. It was intended as a journal-level metric, not an article-level metric. And this is key, because the citation distribution is heavily skewed, with about 20% of articles accounting for 80% of the citations. This is true for pretty much every journal, large or small, famous or obscure. Why not, then, at least use medians instead of means in ranking journals? The median would be more appropriate in the presence of a heavily skewed distribution and significant outliers. The result would be to compress the indicator for journals and clump most journals into similar buckets. The apparent differences among journals would begin to diminish, and the true futility of using the JIF in assessing research results would soon be exposed.
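To see why the mean misleads here, consider a toy simulation. The lognormal distribution and every number below are illustrative assumptions, not real journal data; the point is only that under heavy skew the mean (which is what a JIF-style calculation reports) sits well above the citation count of a typical article, while the median does not.

```python
import random

random.seed(42)

# Hypothetical citation counts for one journal's articles: a lognormal
# distribution mimics the heavy skew described above, where a small
# fraction of papers collects most of the citations.
citations = sorted(int(random.lognormvariate(1.0, 1.5)) for _ in range(1000))

mean = sum(citations) / len(citations)          # what a JIF-style average reports
median = citations[len(citations) // 2]         # what a typical article looks like

# Share of all citations held by the most-cited 20% of articles.
top_20pct = citations[int(0.8 * len(citations)):]
share = sum(top_20pct) / sum(citations)

print(f"mean (JIF-style): {mean:.3f}")
print(f"median:           {median}")
print(f"top 20% of articles hold {share:.0%} of the citations")
```

Running a sketch like this, the mean lands far above the median, and a handful of highly cited papers dominate the total—so a journal-level average says little about any individual article in it.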
Nothing makes that clearer than a closer look at the JIF’s overly nice exactitude. The JIF currently comes calculated out to three decimal places, conveying a false sense of precision. So, what would the JIF rankings look like if you took away the third decimal place? Or even the second?
The figure shows a simple analysis I conducted of what happens if we prune this holy metric of some of its decimal places. The top panel shows the number of ties in JIF scoring to the third decimal place, the middle panel to the second, and the bottom panel to one decimal place. Note that the y-scale changes. As you can see, the vast majority of journals quickly fall into the same bracket, and the apparent differences among journals are much diminished.
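The pruning experiment can be sketched in a few lines. The JIF values below are made up for illustration (the figure itself used actual journal data); the sketch just rounds each score to fewer decimal places and counts how many journals end up sharing a score with at least one other journal.

```python
from collections import Counter

# Hypothetical JIF values for a set of journals -- stand-ins, not real data.
jifs = [9.423, 9.421, 5.872, 5.868, 5.411, 4.902, 4.897, 4.893,
        3.514, 3.508, 3.502, 2.911, 2.907, 2.899, 2.514, 2.511]

tied_at = {}
for places in (3, 2, 1):
    # Round every score to the given number of decimal places,
    # then count journals whose rounded score is not unique.
    counts = Counter(round(j, places) for j in jifs)
    tied_at[places] = sum(n for n in counts.values() if n > 1)
    print(f"{places} decimal place(s): {tied_at[places]} of {len(jifs)} "
          f"journals tie with at least one other journal")
```

With three decimal places every journal in this toy set gets a distinct rank; drop to one decimal place and nearly all of them collapse into shared brackets—mirroring the pattern in the figure.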
Of course scientists know from the beginning of their careers that small differences can be significant. They also know that small differences can be noise. Knowing which is which is what makes good science. Different journals serve different purposes. Like any other form of communication, scientific communication needs to match content to its best audience. Pretending that the JIF, a ranking based on a third decimal place, says anything significant about a journal is self-delusion. Exaggerating small differences to fit one (misleading and misused) metric seems a perfect definition of nonsense.
The point here—and the greater point behind DORA—is not that the JIF can be fixed. The JIF needs to be ignored in scientific culture. We have, in large part, done this to ourselves but this is our community. JIF-worship is a fashion that needs to go out of style quickly. The JIF needs to become a disreputable topic at scientific conferences, faculty meetings, journal clubs, and most of all study sections. (Science librarians are welcome to keep using the JIF for what it was intended—planning subscription renewals.)
Meantime, if you’re not one of the 12,337 scientists and scholars who have already signed DORA, visit the site for details on the JIF and on best practices to diminish its immediate impact. If you agree, stand with us. Bring up DORA at your next lab meeting. Agitate for DORA on your website or on Twitter (use hashtag #sfDORA and follow @DORAssessment). Singing Happy Birthday, DORA, is optional.