A President’s final address typically looks back with a satisfied smile on promises delivered, new territories acquired, and foes vanquished, and looks forward to the estimable qualities of their successor. I do have a spectacular successor, Eva Nogales, a wonderful cell biologist and a fantastic mentor and colleague, but the other items are largely missing. I didn’t make any promises beyond “promoting basic scientific research in the United States and internationally, science education, and the public understanding of science,” and I’ve done little beyond writing columns to make any of them come true. We’ve acquired no new territories, and it’s not clear that we’ve vanquished any foes, although we have tried hard to stand up for foreign and foreign-born scientists in the United States and for the idea that science is truly a global village.
Like many elected officeholders, my main concern over the last year, scientific publishing, was thrust upon me. Like almost every professional scientific society, ASCB publishes journals, Molecular Biology of the Cell (MBoC) and CBE—Life Sciences Education (LSE), and they publish exciting papers in cell biology and life science education, often written by our members. MBoC also makes a contribution to ASCB’s budget from a combination of article publication charges, paid by authors, and subscriptions, primarily paid by institutional libraries.
The “profit” ASCB makes on MBoC is the difference between what it costs to publish the journal and the income we derive from it. Revenue from MBoC is 16% of the Society’s total revenue. Like many other professional societies, we use that money, in combination with income from membership, our annual meeting, grants, and philanthropy, to support activities that benefit our members and the scientific community as a whole. These include professional development activities; support for scientific meetings; travel grants; being one of the principal supporters of the Declaration on Research Assessment (DORA); maintaining a full-time staff member for public policy, Kevin Wilson, who is our connection to Capitol Hill and the U.S. government; and publishing LSE, which, despite generous support from HHMI and the Genetics Society of America, is not self-sustaining. For at least 50 years, those who ran professional societies, and anyone who looked at their financial statements, have known, and been comfortable knowing, that the societies’ journals made money that was used for the greater good of their members and of science as a whole.
The Revolution in Publishing
That income and the activities it pays for are now threatened. We are in the midst of a revolution in publishing that raises profound questions about the future of journals, scientific societies, and scientific publishing itself. As revolutions usually do, this one has a number of causes. The first is technological. From their invention, journals depended on a revolutionary invention of the 15th century: the printing press. Authors wrote (and later typed) papers and sent them to editors to be judged and edited before they were printed, bound, and mailed to subscribers. The Internet has abolished printing as a way of distributing papers, and we can dream of a central electronic repository that contains the entire scientific literature in a form that can be both searched and analyzed to follow the development and spread of ideas and techniques.
The second cause is sociological: We have come to confuse the impact of journals with the impact of individual papers, something that DORA, an initiative to develop better and more practical ways to assess the impact of research and researchers, is trying to tackle. The exponential growth of biology means that none of us can read every paper in our fields, and, faced with a mountain of literature, many of us have retreated to knowing the impact factor of a journal rather than reading the papers published there. The impact factor is the number of times that papers a journal published over the previous two years were cited in a given year, divided by the number of “citable” papers the journal published in those two years. The impact factor is an aggregate statistic about a journal and can be misleading: review articles tend to be cited more often and sooner, and thus contribute disproportionately to a statistic that looks at only the first two years of a paper’s long lifetime. The tyranny of the impact factor was put most bluntly by a colleague of mine who said, “We get 300 job applications, and using the number of papers they published in three journals (Nature, Science, and Cell) to select five people to interview may not be fair or perfect, but it’s fast, and we always get at least one spectacular candidate.”
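The arithmetic behind the impact factor is just a ratio. As a minimal sketch with invented numbers (no real journal's counts are used here), a two-year impact factor works out like this:

```python
# Hypothetical illustration of the impact-factor arithmetic (all numbers invented).
# A journal's 2019 impact factor = citations received in 2019 to items it
# published in 2017-2018, divided by the number of "citable" items it
# published in 2017-2018.

citations_2019_to_2017_2018 = 1200  # made-up citation count
citable_items_2017_2018 = 400       # made-up count of citable papers

impact_factor = citations_2019_to_2017_2018 / citable_items_2017_2018
print(impact_factor)  # 3.0
```

Note that nothing in this ratio says anything about any individual paper: a handful of heavily cited reviews can carry a journal whose median paper is cited far less, which is exactly the distortion the column describes.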
The third reason is political: the argument that the scientific literature should be available to anyone, free of charge and free of copyrights held by commercial entities. The argument comes in two flavors: large commercial publishers should not be making enormous profits by forcing their institutional subscribers to buy complete sets of journals, and the scientific literature should not be hidden behind paywalls, whether these belong to for-profit or nonprofit organizations. To be clear, a journal can be open access and still for-profit through some combination of high publication charges, high volume, and minimal reviewing, and society journals, as I mentioned above, do make a “profit,” although they use it for nonprofit causes. I believe that there’s a difference between profit and “profit,” but it’s a subtle distinction that’s easy to lose sight of.
The fourth reason is academic: the concern that the reviewing process slows down the public appearance of scientific research, with delays of a year or more from submission to publication being common for high-profile journals.
How to Reform Publishing
The combination of these factors has led to a number of changes, experiments, and proposals to reform or revolutionize scientific publishing. These include the creation of eLife, a high-profile, open-access journal launched by three scientific funders (HHMI, the Wellcome Trust, and the Max Planck Society); bioRxiv, an online archive, funded by the Chan Zuckerberg Initiative, where authors can post their manuscripts before publication and even before submission; Review Commons, an initiative from ASAPbio and EMBO Press that allows authors to get reviews before submitting their papers to one of a group of journals (including MBoC); Plan S, a proposal from a group of European funders to require their grantees to publish only in open-access journals that hold publication costs below some threshold; the Faculty of 1000, a group of 11,000 faculty who read and judge the literature; and a proposal from HHMI for a large-scale platform that would host papers after review, with journals providing commentary on the hosted papers rather than publishing them.
The two most radical reforms that have been proposed are to eliminate journals, at least as the primary publishers of the scientific literature, and to decide, after an initial editorial review, whether to publish a paper before soliciting detailed comments from reviewers, thus giving authors the choice about which criticisms they respond to, including offering the ultimate rebuttal: publishing the paper in its originally submitted form despite substantial and consistent objections from multiple reviewers! This early decision has been tried as an experiment at eLife and led to the conclusion that you passed the bar more easily if you were older and better established.
Given the ability of scientists to hold strong opinions, you won’t be at all surprised to hear that there are many conflicting opinions about how the revolution should unfold. Here are two of mine, both personal, and both of which people I respect have strongly disagreed with. The first and most important is Read the Freaking Paper (RTFP). If we all really believed in RTFP, rather than Look Where the Paper Was Published, much of the sociological problem would go away, and students and postdocs could worry more about what was going to be in their paper than about where it would get published.
As a first step toward worrying about the science rather than where it was published, we could require that every search and selection committee start with access to three things: a one-page statement from the candidate about themselves and their work; the titles and abstracts of up to three papers, all without naming the journals and indicating only the candidate’s position in the author list and the total number of authors; and a coded identifier rather than a name for each candidate. The people on the committees could of course find the papers on the Web, but they would need to undertake to read and think about the abstracts first.
While we were at it, we could also anonymize the reference letters, suppressing the gender and identity of both writer and subject. In orchestras, the gender bias in the hiring of musicians declined dramatically after candidates started auditioning behind a screen that hid them from the listeners. We may not be able to hold job seminars and chalk talks behind a screen, but we can try to remove some of the judgments associated with gender, ethnicity, and place of publication at the critical initial stage of searches and selections. DORA plays an active role in trying to make improvements in this area, publishing a blog about best practices and gathering scientists to talk about them at ASCB’s annual meeting.
My second opinion is about deciding to publish papers before engaging in detailed review. If we’re going to do this, and there are arguments both for and against the idea, I think the reader needs a summary of where the authors and reviewers ended up after their discussions. My suggestion is that the paper would appear with an editorial summary, published in the journal and on PubMed, that reflected the level of agreement between reviewers and authors: if there was substantial agreement, the statement would appear in green, with an editorial assessment of what was important about the paper; if there was a mixture of agreement and disagreement, the statement would be in orange, with a brief summary of the unresolved issues; and if the authors refused to budge in the face of reasonable criticism, the statement would be in red. I’ve been reminded that there are occasions when the authors were proved, in retrospect, to be spectacularly right and the reviewers, equally spectacularly, wrong. But with a living assessment, the editorial statement could change color to reflect the authors’ victory, and the same people who now poke fun at their reviewers as they give seminars (thus exhibiting their large and fragile egos!) could do it in Technicolor.
A much-debated question is how we can find a better value system than using the names of journals as a guide to which papers to read and a proxy for the quality of all the papers we haven’t read. There are many possibilities. The Faculty of 1000 presents papers its members recommend. Curatorial journals could identify the 50 platinum, 100 gold, 200 silver, and 500 bronze papers published in cell biology every year. Journals, including society journals, could add editorial sections in which their staff commented on papers that they had published or found elsewhere.
Finally, we could go full democratic Internet and run a service in which individuals added their thoughts and their scores were aggregated, just as we do for books and restaurants. Here, I’m of two minds. One part of me says stop whining and RTFP, which is what I did to write the introduction of my thesis: immersing myself in the bowels of the Countway Library and simply going on a long paper chase from reference to reference, almost exclusively outside the boundaries of the Big Three. The other part remembers that in grad school, before the Internet, I lived in a cheap enough apartment and ate cheap enough food that I had my own subscriptions to Cell and Nature; I read papers every morning for an hour, and I still subsist, more than you’d think, on what I read then. What no one can know is whether I’d be any worse off if I’d been reading random papers in cell and molecular biology rather than what I then supposed was the crème de la crème.
ASCB Welcomes You to the Revolution
So what is ASCB doing amidst all this revolutionary fervor? The answer, of course, is being revolutionary. At our December 2018 Council Meeting, we voted to make MBoC open access, joining LSE in being freely accessible. MBoC’s new editor-in-chief, Matthew Welch, is excited about performing experiments in scientific publishing. The one we have begun is participating in Review Commons, which I mentioned above; those we have discussed include 1) making editorial decisions for publication prior to review, 2) publishing papers with an editorial summary of the discussion between authors and reviewers and an assessment of how much a paper either confirms or contradicts previously published work, 3) commissioning peer reviews, with the authors’ approval, of papers posted on bioRxiv, even if these papers have already been submitted elsewhere, and 4) participating in new platforms for scientific publication, such as Libero or a next-generation bioRxiv. Matt’s overall goal, which I strongly support, is to perform experiments, help guide the publishing revolution, and make a journal that is more accessible and more exciting to authors and readers, while maintaining our commitment to publishing rigorously conceived, executed, and interpreted science.
About the Author:
Andrew Murray is the 2019 ASCB President. He is the Herchel Smith Professor of Molecular Genetics, Howard Hughes Medical Institute Professor, and Director of the NSF/Simons Center for the Mathematical and Statistical Analysis of Biology at Harvard University.