On Research Funding and the Power of Youth

President's Column

Peter Walter, Tony Hyman, and Arshad Desai

“A person who has not made his great contribution to science before the age of thirty will never do so.” While this quip of Albert Einstein’s certainly does not hold universally for cell biologists, it is well recognized that innovation in science and technology tends to be driven by youth. We need only look at the pioneers of Silicon Valley, physics at the turn of the 20th century, or the average age at which Nobel Prize winners perform their groundbreaking work (Figure 1) to remind ourselves that any society that wishes to drive true innovation must enable its young investigators.1–4 Tangibly, it needs to fund their research based on promise, and it needs to promote their paths to early independence.

Despite this, in the United States, where the National Institutes of Health (NIH) funds the vast majority of research in cell biology, the age at which young investigators launch their independent research careers has been rising for decades. In the early 1980s, more than 20% of NIH R01 research grant recipients were under 36 years of age; as of 2014, fewer than 2% were (Figure 2). The failure to fund the most outstanding young scientists and to support innovative experimentation at an early career stage has diminished the creativity and innovation of the American research enterprise.

The obstacles confronting young investigators are complex and have persisted for decades. For example, the lengths of graduate and postdoctoral training have grown over the past three decades, such that young scientists are often in their early to mid-30s before transitioning to a faculty position. Furthermore, the length of time between appointment to a faculty position and securing an NIH R01 grant has also grown: In 1980 it took less than one year on average, whereas in 2013 the average was five years.

These delays mean that the average early-stage investigator is close to 40 years old before receiving his or her first NIH R01, the standard indicator of career independence. Other factors affecting independence include, but are not limited to, the difficulty of obtaining preliminary data, the structure of peer review, and the rise of team-based approaches.6

These delays to full career independence have many adverse consequences. Most important, the new, groundbreaking ideas of the next generation of researchers are strongly discouraged by a system that rewards prolonged grant-seeking over scientific experimentation. The long wait to become an independent researcher has also led many talented young scientists to leave the research workforce for a variety of career and family reasons, and has discouraged many others from entering in the first place.

 


Figure 1. An age distribution for scientific genius. The ages at which individuals produced Nobel Prize-winning insights over the 20th century. Figure modified with permission from Jones, Reedy, and Weinberg (reference 1).

Past and Current Methods of Funding Young Investigators
The advancing age at which a young investigator gains independence has been obvious to the NIH for several decades, motivating it to try several approaches to promote scientific independence for early-stage investigators (defined as those within 10 years of their terminal degree) and for new investigators (those who have never received a major NIH grant).7

The NIH introduced the First Independent Research Support and Transition (R29) award in 1986. This grant was intended for young faculty members within five years of completing their postdoctoral work, but its level of support was very modest relative to an R01. An evaluation of the program revealed that R29 awardees were less successful at obtaining subsequent R01s than young faculty members who applied directly for R01s; the sense was that the restrictions imposed by the program hindered these young scientists in their most critical developmental years. For these reasons, the R29 was discontinued in 1998.

The Pathway to Independence (K99/R00) awards were introduced in 2006. The K99 phase supports scientists in the last two years of a postdoc; the R00 phase, for those who successfully transition, supports the first three years of an independent position. Nearly 90% of K99 awardees move into faculty positions and into the R00 phase of the program, and nearly half of K99/R00 recipients go on to receive an R01. The NIH made over 400 K99 awards and nearly 570 R00 awards in 2015. The size of K99 awards varies by institute, while R00 awards provide a maximum of $249,000 per year.

The NIH Director’s Early Independence (DP5) awards were introduced in 2011. These awards are directed to scientists who move directly from graduate school to a faculty position, with a maximum level of support of $250,000 in each of five years. These awards have not been around long enough to determine how successful DP5 awardees are at receiving a subsequent R01. Furthermore, the program is quite small, providing no more than 20 awards per year, so this program will likely not have a significant effect on the average age at which young investigators achieve independence.

Figure 2. Percentage of NIH R01 investigators age 36 and younger (in blue) and age 66 and older (in red) in fiscal years 1980 to 2010. Reproduced from Rockey (reference 5).

The NIH Director’s New Innovator (DP2) awards were introduced in 2007 for early-stage investigators, providing five years of support at a maximum of $300,000 per year. The primary requirement is that the investigator be “unusually creative” in his or her proposal; neither preliminary data nor an itemized budget is required. The NIH originally made fewer than 10 DP2 awards per year, until the program expanded in 2012 to allow 40 to 60 awards each year.7 Because the program is small and young, no data are presently available on how DP2 awardees fare when applying for R01s.

Prior to 2007, success rates for new investigators’ R01 submissions were significantly lower than those of established investigators. In 2007, the NIH enacted a policy to equalize the success rates of new and established investigators; this policy essentially boosts the scores of new investigators so that this group is funded at a rate similar to that of established investigators. The results are mixed. Success rates for both early-stage and non-early-stage new investigators have improved: The rate for non-early-stage new investigators is now very close to that of established investigators, while the rate for early-stage investigators still lags behind both groups.

Despite these efforts from the NIH, the graying of the biomedical professoriate has not abated, suggesting that a stronger effort will be required to solve this serious problem. In fact, members of the U.S. Congress have begun debating legislative ways to fund more young scientists. The broad-stroke legislative mechanisms currently being suggested do not seem promising; the scientific community itself should determine how best to solve the complex and persistent problems confronting young investigators. Current policies are damaging U.S. science, with long-term consequences for American innovation. Other countries are investing in the future of their scientific workforce, and we need to do the same.


An ERC-like Mechanism to Fund Young Researchers in the United States
Recently, some ideas on funding young researchers have come out of Europe with the establishment of the European Research Council (ERC). The ERC was launched in 2007 as the first pan-European science-funding agency, and it funds investigators of all ages from all European Union member countries and additional affiliate nations. Funding is provided in three major scientific domains: Physical Sciences and Engineering, Life Sciences, and Social Sciences and Humanities.4

Figure 3. Types of ERC grants

Within each domain, the ERC provides three types of grants based on an investigator’s experience: ERC Starting Grants for those 2–7 years post PhD, ERC Consolidator Grants for those 7–12 years post PhD, and ERC Advanced Grants for established investigators (Figure 3).

In 2014, nearly 24% of ERC funding was set aside for ERC Starting Grants and another 32% for ERC Consolidator Grants. Thus, in 2014 nearly 56% of all ERC funding was devoted to investigators within 12 years of receiving their terminal degree; the average age of Starting Grant awardees is about 35 years.

Similar to competitive U.S. grants, all ERC grants are peer reviewed and funded based on merit scores. Importantly, all applicants in a given category (Starting, Consolidator, or Advanced) compete only against applicants in the same category. The crucial point is that young faculty members competing for ERC money do not have to compete with established investigators who have considerable track records, preliminary data, and likely a sizable lab; rather, researchers at each career stage compete with one another. This allows funding decisions to be steered toward the needs of the different stages: Starting Grants can be funded mainly on promise, with little need for preliminary data, while Advanced Grants are funded on both the investigator’s track record and the proposal itself. A recently completed retrospective evaluation of finished ERC grants found a remarkably positive outcome, with almost three-fourths of the grants judged to have produced either a scientific breakthrough or a major advance.

The closest American comparison to an ERC Starting Grant is the NIH Director’s New Innovator (DP2) award, which funds innovative research by scientists within 10 years of their PhD or equivalent degree. Individual DP2 and ERC Starting Grants are roughly the same size (ERC: €300,000 per year, or ~$340,000; NIH: $300,000 per year) and duration (five years). However, the overall sizes of the programs are dramatically different: In 2013, the NIH made 51 DP2 awards, whereas the ERC made 300 Starting and 312 Consolidator awards.

Establishment of an ERC-like system in the United States could focus funding on young scientists.6 To construct this system, the NIH should work with the community to determine the proper size of the program, that is, how many young scientists should be funded by this mechanism. The NIH might expand the DP2 program to roughly 500 awards per year. Expanding the program steadily over a five-year period until it is similar in scope to the European system would allow the research enterprise to recalibrate its funding strategies accordingly. As young scientists compete for and win these new funds, their ability to conduct innovative research and launch independent labs would be greatly improved. An ERC-like system in the United States that greatly expands the DP2 program would address the underlying problem of underfunding young investigators and would also promote innovation and risk-taking in experiments.
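For a rough sense of scale, a back-of-envelope estimate (ours, not from any NIH planning document), assuming every award runs at the DP2 maximum of $300,000 per year for the full five years: at steady state, roughly 2,500 such awards would be active at any one time, so

\[
500\ \frac{\text{awards}}{\text{year}} \times 5\ \text{years} \times \frac{\$300{,}000}{\text{award}\cdot\text{year}} \approx \frac{\$750\ \text{million}}{\text{year}},
\]

on the order of 2.5% of the NIH’s approximately $30 billion annual budget.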


Comparing the Grant Review Process on Opposite Sides of the Atlantic
The evaluation of grants from young investigators requires a fine-tuned review system that can assess promise and integrate it effectively into the ranking of applicants. We, together with many others, have been involved in assessing applications to the ERC’s LS3 (Cell & Developmental Biology) panel for the past six years. One of us is also currently serving on an NIH study section, and another has served on many past occasions. This experience has allowed us to compare review methodologies and to identify features of the ERC review process worth considering for implementation in NIH study sections.

The NIH study section has been compared to the early years of American Idol, where a snarky comment from a Simon Cowell-esque reviewer guarantees the demise of a promising application.8 This reputation is unfair: Study section members by and large work extremely hard to deliver a fair evaluation, and study section meetings are respectful and well run (although on rare occasions there is behavior that merits the American Idol comparison). However, there are systemic issues with the evaluation process that leave study section members in the dark as to what they have collectively decided.

The ERC panel meetings, held in Brussels, employ an evaluation process that promotes a group effort to assess and rank all applications. For the Starting and Consolidator stage applications, in whose evaluation we have been involved, the review process has two steps: application review (Step 1), followed later in the year by an in-person interview (Step 2) of the applicants who pass the threshold at Step 1.

For the Step 1 meeting, each application is pre-reviewed by four panel members who provide preliminary scores and brief written evaluations of the proposal and the applicant. The subject areas covered are broad (e.g., stem cell biology, plant development, genetics, and biochemical reconstitution of cellular processes have all been discussed in our ERC panel), and panel members are expected to review applications as generalists, which in our experience focuses the evaluation on the bigger-picture view of the proposal and of the applicant’s caliber. At the meeting, all applications are discussed; for each, the lead reviewer presents the application together with her or his evaluation of the proposal and the applicant, followed by brief comments from the other three reviewers and then a discussion involving the panel as a whole. At the end of each discussion, an informal preliminary score is obtained by counting hands in favor of an overall A, B, or C grade.

The critical part of the ERC panel meeting happens when the preliminary scores for all applications are tallied and a ranked list is generated. The panel then revisits the entire set of applications and spends significant effort adjusting the ranking after having heard the discussion of every application. This is the most important part of the meeting, when the critical “gray zone” applications are weighed against each other and the decision is made on whom to invite for an interview (in the Step 1 evaluation) or whom to recommend for funding (in the Step 2 evaluation, after the in-person interview). In our panel, for the particularly difficult set of applications around the funding border (which is clearly defined at the start of the meeting), a paper vote is taken to finalize the ranking. Other panels likely use different mechanisms, as each panel is given considerable leeway in precisely how it addresses the ranking challenge. Importantly, though, by this mechanism the panel ends up working together as a group to develop a fair and robust process to “draw the line,” which leaves panel members with the satisfaction that they have done their job as a group of peer reviewers. For applicants who do not succeed, the panel members provide a brief report highlighting the key reasons behind the panel’s decision, which helps applicants reframe their applications for future consideration.

By contrast, as is likely familiar to many ASCB members, in an NIH study section half the applications are triaged, providing those applicants with only the written reviews, which are often divergent. For the applications that are discussed, each of the three assigned reviewers first states her or his score, followed by a brief presentation of the application by Reviewer 1. Reviewers 2 and 3 provide additional input, and the panel members are then invited to discuss the application. However, it is rare for non-experts to participate, given the targeting of review assignments to experts and the need to complete reviews in a short time period. At the end of the open discussion, the three reviewers re-state their scores, generating a score range. All panel members then privately enter a number into an online scoring sheet, typically within that range; on occasion, a panel member chooses to score outside the range (and must declare this intention to the panel). All scores count equally toward the average final score. As has been noted before and will not be belabored here, forceful reviewers influence which end of the scoring range ends up being favored by non-expert panel members.

After each panel member has entered a private score, that application is never mentioned again in the study section. Unlike ERC panels, the study section never gets the opportunity to consider together all of the applications it has been asked to review, which in our view underutilizes the collective strength of the panel. Thus, at the end of the day, study section members are in the dark about exactly what they decided for many of the gray zone applications. Effectively, they provide a somewhat random ranking in the middle range to a program officer, yet percentile calculations based on the averaged scores cloak this uncertainty in an illusion of objectivity and numerical precision.
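To make this statistical point concrete, here is a minimal simulation sketch (our illustration, not NIH code; the panel size, score range, and percentile conversion are all assumptions we have chosen for the example). Ten indistinguishable gray zone applications, each given the same stated score range, receive different averaged scores purely from the noise of the private votes, and the percentile conversion then presents that noise as a crisp ranking.

```python
import random
import statistics

def averaged_panel_score(score_range, n_members=25):
    """Each panel member privately picks an integer score within the
    reviewers' stated range; the final score is the plain average."""
    low, high = score_range
    return statistics.mean(random.randint(low, high) for _ in range(n_members))

random.seed(1)

# Ten hypothetical, equally strong "gray zone" applications, all with the
# same stated range of 3-5 (NIH scores run 1 = best to 9 = worst).
averages = [averaged_panel_score((3, 5)) for _ in range(10)]

# Rank by averaged score and express ranks as percentiles, mimicking how
# averaged scores become the numbers that drive funding decisions.
for rank, idx in enumerate(sorted(range(10), key=lambda i: averages[i])):
    print(f"application {idx}: average {averages[idx]:.2f} -> percentile {rank * 10}")
```

Running this, applications that are identical by construction end up spread across tens of percentile points, which is precisely the illusion of precision described above.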

In our view, the relative ranking of all proposals after they have been discussed is a critical element missing from the NIH review process. We appreciate that conflicts of interest pose a challenge here; in ERC panels the issue is addressed by an honor code under which panel members stay silent when applications with which they may be in conflict are discussed (and, to be fair, the declared conflicts in both ERC and NIH panels rarely represent true conflicts of interest, instead reflecting institutional affiliations or one-off multi-lab collaborations). In our experience, this honor-code approach has worked remarkably well because reviewers approach their duties as professionals. It frees the panel to work together as a group and provide a ranked list to a funding agency that follows the panel’s recommendation based on its collective expertise, rather than letting scores with minor fractional variation dictate the fate of many applications, and with them, the fate of many young careers.

While the ERC is only a decade old, its impact on science in Europe is already amply evident.9 Key to the ERC’s success are its three-tiered application system, which promotes young scientists, and a fair and robust review process that demands significant commitment from panel members but also empowers them to conduct peer review in which the panel works as a group to draw the line. In our view, the NIH should experiment with similar programs targeted at young scientists and with evaluation systems in which reviewers are tasked with collectively ranking the applications under consideration, instead of perpetuating the statistical uncertainty created by the secret vote and the inability to revisit applications after their individual discussion.

Note
We thank Bruce Alberts, Lisa Dennison, and Chris Pickett for valuable input.


References
1. Jones BF, Reedy EJ, Weinberg BA (2014). Age and Scientific Genius. NBER Working Paper No. 19866. www.nber.org/papers/w19866.
2. Jones BF, Weinberg BA (2011). Age dynamics in scientific creativity. Proc Natl Acad Sci USA 108, 18910–18914.
3. Ness R (2015). The Creativity Crisis. New York: Oxford University Press.
4. Hyman AA (2016). Encouraging innovation. iBiology Magazine. http://bit.ly/2bFK5oA.
5. Rockey S (February 13, 2012). Age distribution of NIH principal investigators and medical school faculty [blog post]. NIH Extramural Nexus. http://bit.ly/2cLdU7m.
6. Alberts B (2009). On incentives for innovation. Science 326, 1163.
7. Hyman AA (2013). Funding innovative science. Science 339, 119.
8. Pagano M (2006). American Idol and NIH grant review. Cell 126, 637–638.
9. Prove the worth of basic research [unsigned editorial] (2016). Nature 535, 465.

 


Questions and comments are welcome and should be sent to president@ascb.org.

About the Authors:
Peter Walter is ASCB President. Tony Hyman is at the Max Planck Institute of Molecular Cell Biology and Genetics.