Journal Impact Factor versus Journal Authority Factor. Table from “A Glaring Paradox” by Mark Johnston, doi: 10.1534/genetics.115.174771, Genetics, March 1, 2015, vol. 199, no. 3, 637-638. Reprinted by permission of GSA.

We all do it. We wrestle with an experiment for months until finally it works. The data are excellent and the findings novel. We drive our collaborators half mad with revisions and supplements to the many drafts. Now where do we send our precious paper? Nothing but the best will do, so it’s off to a top-tier journal, one with a big impact factor and often a single-word title.

Alas, all too many of us in the scientific community come to believe early in our careers, and continue to believe thereafter, that in order to get that NIH grant or to land that Ivy League job (or any job, these days), we need a pedigree, a citation we can brandish, ideally one from that very small set of journals with short titles, broad focus, and clout. So we send our best work to the big journals and hope for the best.

This is crazy, and a new analysis by the Editor-in-Chief of Genetics shows why. I should say first that I have long been frustrated by the distortions of the current scientific publishing system, a toxic situation embodied at its worst by the so-called Journal Impact Factor (JIF), to which I, along with the 12,376 fellow scientists and scholars who have so far signed the Declaration on Research Assessment (DORA), am strongly opposed.

But Mark Johnston, who is the top editor at Genetics, has just given me a new view of the whole mess in his illuminating editorial entitled “A Glaring Paradox.” Johnston makes ingenious use of the JIF, which purports to measure a journal’s impact, and of an alternative metric, the h-index, which claims to evaluate an individual scientist’s prominence. Using these metrics, Johnston looks at biology journal editors themselves. In this new analysis, Johnston finds that the correlation is inverse: the higher the h-index of the editors, the lower the JIF of the journal. This inverse relationship between “impact” (JIF) and “experience” (h-index) would be amusing if it were not so alarming.
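For readers who want the two metrics pinned down: a scientist’s h-index is the largest number h such that h of his or her papers have been cited at least h times each, and a journal’s JIF for year Y is the number of citations in Y to items the journal published in Y-1 and Y-2, divided by the number of citable items it published in those two years. The short Python sketch below illustrates both calculations and the kind of inverse relationship Johnston reports; the numbers are invented for illustration and are not Johnston’s data.

```python
# A minimal sketch of the two metrics discussed above, using invented
# numbers for illustration only; these are NOT Johnston's data.

def h_index(citations):
    """Largest h such that h of the papers have at least h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

def jif(citations_this_year, citable_items_prev_two_years):
    """Citations in year Y to items published in Y-1 and Y-2,
    divided by the number of citable items in those two years."""
    return citations_this_year / citable_items_prev_two_years

def pearson(xs, ys):
    """Plain Pearson correlation coefficient (no external libraries)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(h_index([50, 30, 22, 20, 8, 6, 5, 4, 1]))  # -> 6
print(jif(1200, 400))                             # -> 3.0

# Hypothetical journals: (JIF, mean h-index of the editorial board).
journals = [(35.0, 12), (30.0, 15), (10.0, 40), (5.0, 55)]
jifs = [j for j, _ in journals]
editor_h = [h for _, h in journals]
print(round(pearson(jifs, editor_h), 2))  # strongly negative, i.e., inverse
```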

As a group, the professionally trained scientists who make decisions on biology papers at the big journals with the big JIFs have significantly less scientific experience and far weaker publication records than the editors of lower-JIF biology journals, which are run by practicing scientists for professional societies such as ASCB. Though low on personal scientific impact, these professional editors at the big journals have tremendous power over what appears in their journals, Johnston explains. They recruit peer reviewers, synthesize (and sometimes skew) reviewer opinions, and actively steer the decision process. Thus, says Johnston, we send our best papers to journals whose editors have relatively thin personal track records in scholarly publishing. Clearly there is value added by publishing in these journals, and the professional editors who work there are fully trained as scientists, no question. Big journals and big-journal editors bring much to the overall process. But the worrisome part of the publishing culture is that bench scientists ignore their own community’s respected journals in hopes of publication in a narrow group of branded journals. By doing so, they legitimize, and further widen, the chasm between impact and experience that Johnston describes.

Along these lines, imagine a world where patients flock for open-heart surgery to lightly experienced surgeons, while surgeons of note and proven ability are left to deal with umbilical hernias and appendectomies. Clearly, neither the free market nor common logic is working in scientific publishing.

I repeat that I am not a fan of any standalone statistical “impact” metric, including the h-index and, even less so, the JIF. Nor is Johnston, who signals his own skepticism when he describes the JIF as “widely discredited yet surprisingly influential.” (For some interesting reaction to Johnston’s editorial, see his follow-up blog.) But his original analysis is clear and passes the face validity test.

Society journals like ASCB’s own MBoC or Johnston’s Genetics (which is published by GSA, the Genetics Society of America) come out smelling like roses in this new analysis because we build outstanding boards of editors by calling on working scientists who have the experience and peer-reviewed accomplishments to judge new work. Yet I know that many scientists (including many in ASCB and GSA) reach for the JIF “stars” when sending out their best work because they feel that their careers are on the line. This is no secret. Ask graduate students, postdocs, or faculty, young and old, why they do not send their best work to MBoC or sister society journals and you will hear that they cannot afford the risk of not being seen or appreciated.

Alas, this is true. We all need to be noticed but, as scientists, we are to blame for this. For too long, we have worshipped the imprint of a small set of journals. Established scientists and young investigators, we queue up for months outside the Big Journals, all in hopes of a trophy citation that will transform our scientific lives and our personal lives as well. We all have families to please and bills to pay.

From personal experience, I understand the view from the lab where the best career guarantee these days comes from publishing in a one-word-title journal. You have to play by the rules, especially in science. But now that I have crossed into science policy and am running a prestigious professional society like ASCB, I can see that the best prospects for the research science that I am pledged to nurture involve changing bad rules, both written and unwritten. This is why, at ASCB, we are fighting so hard against the misuse of the JIF and for journals that reflect the peer judgments of practicing scientists.

I am very proud of MBoC, which in the Johnston analysis comes out glowing with its stellar board of editors and our all-star Editor-in-Chief David Drubin from the University of California, Berkeley. (I must lay my cards on the table here. ASCB and GSA are partners in publishing the online, open access education journal CBE—Life Sciences Education.) As the Executive Director of ASCB, I am the publisher of MBoC and I am very proud of what it does for cell biology. But I am frustrated by scientists in virtually all fields who do not see how damaging the current publishing system is to research progress. Our best interests as scientists are in our own hands. That uncomfortable paradox is also our best hope for reform.

Stefano Bertuzzi

Dr. Stefano Bertuzzi is the Executive Director of the American Society for Cell Biology. In this position he is responsible, with the ASCB Board, for strategic planning and all operations at the Society to serve the needs of its ~9,000 members and to promote the field of cellular biology and basic science. Email: sbertuzzi@ascb.org

