The ultimate benefit of preprints (scientific manuscripts posted online before the completion of peer review) is the acceleration of discovery by making work available to scientific colleagues without delay. But there are many ways preprints benefit individual authors, too. One of the most immediate is the opportunity to receive additional feedback on manuscripts. How do we ensure that this feedback is constructive, rather than destructive?

Preprints enable early and broad feedback

Traditionally, authors might circulate a draft of a completed manuscript to trusted collaborators or colleagues within their department to get feedback that helps them improve the paper. But the pool of potential contributors to this feedback is typically limited. Manuscripts shared in this way are assumed to be confidential; authors may fear being scooped by unscrupulous researchers who could plausibly deny ever having seen the work.

By contrast, preprints provide authors with the security of a date-stamped, permanent, publicly accessible record—and thus they allow anyone in the world to participate in providing feedback. As a result, you can hear from a broader group of people than you might otherwise reach, strengthen your manuscript, and perhaps even find a coauthor.

In fact, one of the Altmetric Top 100 papers of last year was a contentious article on bioRxiv about cell phones and cancer. Not only did it spark lots of media attention, it also generated thoughtful discussion (141 comments on the first version) that put the reported findings into context.

It’s Open Access Week! This year’s theme is “Open in order to”—a call to see open access not as an end unto itself, but rather as a means to other positive outcomes.

Is public feedback a good thing?

Despite some high-profile examples, only ~10% of bioRxiv preprints have public comments directly on the preprint. It’s possible that much more feedback comes to researchers privately, via email. I once received very helpful, constructive feedback via email on a preprint, prefaced with a note saying “I’d post this publicly, but don’t want to seem critical.” For better or worse, we have professional standards around politeness that probably push some of the most useful feedback underground.

Is this a bad thing? Private feedback will reach the authors nonetheless. In addition, there might be good reasons to keep preprint comments private. Last year, the venerable physics preprint server arXiv ran a survey to assess interest in adding—among other things—public commenting features. Some arXiv users objected, including Izabella Laba from the University of British Columbia:

“Internet comment sections … have …. become a racist, sexist bog of eternal stench from which any reasonable person is best advised to stay away.”

“Women, in particular, get far too many comments questioning our competence, implying that we might not know the basic literature, that we might not really understand our own results, that said results might turn out to be false or trivial if only someone qualified had a look, or some such. We’re also subject to gendered standards of “professionalism” that do not allow us to respond in kind and give as good as we get. But if you tell me that men, too, can get inane, confused, or malicious comments–why, yes, I agree. More reason to refrain from making the arXiv more like YouTube.”

Such malicious comments could potentially have serious consequences. Some researchers may fear that public comments on their preprint might adversely impact their chances of publishing the work in their chosen journal. Indeed, FASEB’s preprint policy explicitly states that public feedback on preprints may be taken into consideration when evaluating a manuscript.

While I understand the concern (particularly if the criticism is personal or otherwise unfair), isn’t the early detection of legitimate flaws a good thing for science overall? Furthermore, a civil public dialog about preprints could model constructive scientific dialog and help other researchers thinking about going down similar experimental paths. More broadly, developing a culture that includes more active scientific debate could help identify, resolve, and prevent sources of irreproducibility.

If we want to reap these benefits of public commenting without entering a “bog of eternal stench,” we need to consider how to moderate those comments.

Anonymity—a red herring?

A key decision in this area is whether to allow feedback to be posted anonymously. One only needs to compare PubPeer and PubMed Commons to see how signing comments—or leaving them anonymous—can affect the nature of the conversation. PubPeer, which allows anonymous commenting, fosters lively and sometimes hostile arguments. PubMed Commons, which restricts posting to PubMed authors, is much more restrained, in both tone and volume of traffic.

Comments on bioRxiv work through the commenting platform Disqus, which requires a Google or social media login. Is signing comments too high an energy barrier for legitimate concerns to be aired? Earlier this year, prominent authors posted a preprint to bioRxiv without a methods section. Despite some whispers on Twitter, no one commented on bioRxiv directly until, months after the preprint first appeared, an anonymous group called “Preprint Now” pointed out the omission in a polite comment. The authors immediately apologized and uploaded a new, complete version of the manuscript.

If this group hadn’t been allowed to post under a pseudonym, would the comment have been made at all? And how important is the identity of the commenter, so long as the concerns voiced are sound? After all, most of our peer-review system functions under reviewer anonymity. At least in theory, journal editors serve a moderating and filtering role, ensuring that the tone of reviewers’ reports is constructive. But preprint servers don’t (and perhaps shouldn’t) operate with this level of attention to every manuscript. Perhaps the answer lies in distributed moderation, which is already implemented to some extent at bioRxiv: Disqus allows users to vote comments up and down. New initiatives on the horizon will take this much further.

Journal clubs and community moderation

ASAPbio Ambassador Prachee Avasthi at the University of Kansas Medical Center ran a journal club class for graduate students called “Analysis of Scientific Papers,” in which papers were selected exclusively from preprint servers. She has shared her syllabus and introductory slide deck, and the students’ reviews can be found on the Winnower.

To help the concept of preprint journal clubs spread, two other ASAPbio Ambassadors, Sam Hindle and Daniela Saderi, have launched a platform called PREreview, described in detail in this eLife blog post. They’ve conducted a survey to understand attitudes toward preprint journal clubs and created resources to help you get started. Other platforms exist, too: see Academic Karma and Peer Community In.

Even the act of commenting on preprints will become more immersive. bioRxiv and Hypothes.is have announced a partnership that will allow web annotations to appear anywhere in the text of preprints—helping to organize and focus the discussion. The platform will also allow anyone to create their own “layer,” meaning that the privacy and moderation of comments within that layer can be controlled by individual groups of scientists.

Commenting on interim research products offers opportunities to add constructive layers to our existing peer-review process. Now it’s up to groups of scientists—departments, classes, labs, and societies—to decide what role they’ll play in this process.

The views and opinions expressed in this blog are the views of the author(s) and do not represent the official policy or position of ASCB.

Jessica Polka

Jessica Polka is director of ASAPbio, a biologist-driven nonprofit working to improve life sciences communication. She is also a visiting scholar at the Whitehead Institute and a member of ASCB's public policy committee.