As Journal Impact Factor Loses Respectability, Can Altmetrics Provide Other Measures?

It’s been nearly a year since an insurgent group of scientists and journal editors gathered in San Francisco at the ASCB Annual Meeting to plot a counterattack on the outsized influence of the “impact factor” on scientific assessment. A metric invented in the 1960s by Eugene Garfield to help academic librarians decide which influential journals to subscribe to, the journal impact factor had become a misleading measure, the insurgents agreed. In May, they issued the Declaration on Research Assessment (DORA), asking the world science community to sign on to 18 recommendations for new standards of research assessment that would move away from journal-based metrics toward assessment of individual work. The number of DORA signatories is now approaching 10,000. Evidently, scientists are ready for something better than the impact factor, but what?

As the impact factor loses favor, alternative metrics or “altmetrics” have been popping up. Some are refinements of, or new methods for, gauging the impact of individual papers. Others track nontraditional measures such as online article views, discussions in the news and on social media, “saves” in reference managers like Mendeley, and entries in other citation databases. Heather Piwowar, an informatics scientist and entrepreneur, is a prominent advocate for these new kinds of metrics.

Piwowar recently finished a postdoc at Duke on the impact of data sharing and just co-founded ImpactStory, a site designed to help researchers determine their impact. She believes altmetrics give a more rounded view of a scientist’s achievements, one that extends beyond influence on his or her peers. “Science has made an impact in a lot of different ways, not just on other scholars, but on the public, or on practitioners, or on teaching environments. Altmetrics help us measure those,” Piwowar said. And unlike the impact factor, altmetrics provide more than one measure of impact. “A huge strength of altmetrics is that it’s lots of kinds of engagement by lots of different audiences,” she explained.

Sites like ImpactStory and Altmetric aggregate many different metrics from around the web. ImpactStory displays this information on a scientist profile that shows where and when a scientist’s articles, data, slides, and software have been viewed, cited, recommended, or discussed. Altmetric also collects discussions from Twitter, Facebook, science blogs, news outlets, and other sources, and makes them available when you visit an article using its “Altmetric it!” widget. Additionally, Altmetric rolls all of that data into a single Altmetric score using a transparent scoring algorithm.
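To make the idea of a composite score concrete, here is a minimal sketch of how mention counts from several sources might be folded into one number. The source names and weights are hypothetical, chosen only for illustration; this is not Altmetric’s actual algorithm.

```python
# Illustrative sketch only: a composite attention score computed as a weighted
# sum of mention counts per source. The sources and weights below are
# hypothetical and do not reproduce Altmetric's actual scoring.

SOURCE_WEIGHTS = {
    "news": 8.0,       # hypothetical weight for news coverage
    "blogs": 5.0,      # hypothetical weight for science-blog posts
    "twitter": 1.0,    # hypothetical weight for tweets
    "facebook": 0.25,  # hypothetical weight for Facebook posts
}

def composite_score(mentions):
    """Sum each source's mention count times its weight; unknown sources count as 0."""
    return sum(SOURCE_WEIGHTS.get(source, 0.0) * count
               for source, count in mentions.items())

# Example: a paper with 2 news stories, 1 blog post, and 40 tweets.
print(composite_score({"news": 2, "blogs": 1, "twitter": 40}))  # 61.0
```

Real services differ in which sources they count and how heavily they weight each one, which is why scores from different aggregators are not directly comparable.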

Journals themselves are getting into the altmetrics business. The open-access publishers PLOS and eLife offer “article-level metrics” that report data, like social shares and PDF downloads, for every article. Sandra Schmid, a professor at the University of Texas Southwestern Medical Center, commends the experimentation. In a recent Science Careers webinar, Schmid said, “More journals should be doing what PLOS does, provide not only the number of citations, but the number of hits and reads on a paper… that’s something important for early career scientists.” Schmid explained that article citations can take a while to accrue, which skews against the publications of young scientists. That lag is why postdocs and early-career scientists need to draw on diverse metrics.

Piwowar believes that new metrics could speed up the process of science and promote innovation in publishing. If scientists aren’t worried about publishing in high-impact journals, which tend to have lengthy review processes and require mounds of data, they might be more likely to try out a new journal with a new idea for peer review, she contends.

Altmetrics can also measure the impact of more than just publications. “Getting metrics for data and software are important too,” Piwowar said. They can track presentation downloads on SlideShare as well. “We’ve been really surprised how popular metrics for slides are because it helps people make the case that their presentations are making a difference,” Piwowar said.

Asked to identify an especially novel metric, Piwowar selected Mendeley, a scholarly bookmarking app that organizes users’ PDFs and stores them in the cloud. “Mendeley Readers is a great one because most papers have been added to at least someone’s library. It’s not true that all papers have been cited… or tweeted,” Piwowar said. “However, Mendeley is still mostly academics, so it’s not a diverse audience,” she cautioned.

But altmetrics aren’t perfect, Piwowar concedes. They can’t, for example, distinguish between positive and negative engagement, such as criticism on Twitter. Studies do show, however, a positive correlation between social media mentions and later citations, regardless of the content of those mentions. “I think most people who engage in these online tools [that generate altmetrics] would agree that it’s not noise… but more research is needed to know what it is that [altmetrics] are doing… what do they mean and what don’t they mean?” Piwowar said.

Some critics say that altmetrics will encourage scientists to waste time writing blogs or tweeting when they should be at the bench or at the computer wrestling data into publishable papers. But Piwowar believes that “impact” is impact. “I think it’s good to reward scientists who try hard to have their science make an impact.” Plus, public awareness of the value of research is essential if the government is to continue supporting it.

Many altmetrics don’t gauge the quality of a paper. Rather, these “altmetrics reward science that’s made a difference and that’s different than science that’s of high quality,” Piwowar said. But sites like PubPeer, which provides post-publication peer discussion of papers on journal websites, or F1000Prime, which has a team of faculty experts writing short blurbs about papers they find interesting, offer a means of qualitatively addressing a paper’s value and significance.

Schmid at UT Southwestern Medical Center has been outspoken on the need for qualitative altmetrics. “It always needs to come to a qualitative measure,” Schmid explained in an essay in Science Careers. As chair of the Department of Cell Biology, Schmid has changed the way the department evaluates candidates for faculty positions. Instead of reviewing CVs, where it’s tempting to just glance at the names of the journals where a candidate published, Schmid asks applicants to write essays explaining the significance and scope of their research without mentioning journals. She hopes that this process will “identify future colleagues who might otherwise have failed to pass through the singular artificial CV filter of high-impact journals, awards, and pedigree.”

Explaining her department’s new policy on the Science Careers webinar, Schmid turned to a baseball metaphor, saying a base hit is a base hit, whether it comes in Denver or at Yankee Stadium. The impact factor turns that on its head, scoring location over actual impact. “And that’s ridiculous. It’s still a base hit… the JIF [journal impact factor] is simply not a useful tool.”

About the Author:


Christina Szalinski is a science writer with a PhD in Cell Biology from the University of Pittsburgh.