If you are a scientist, you know at least a little about the current crisis in academia. The number of PhD students in a system that does not have enough academic jobs for them after postdoctoral training is alarming. It is also common to hear that hundreds of qualified individuals compete for a single faculty position at universities (and we are not talking only about Harvard, MIT, and Stanford). Yes, only ONE. There is no doubt that the individual who takes the position will be highly qualified for the job. As shown in Jessica Polka’s great piece in a COMPASS Blog, 53% of graduate students wish to be professors, but only around 10% will get their “dream job.” Sadly, PhD unemployment almost doubled in the last decade. It is still much lower than the general unemployment rate, but it is a significant yellow flag.
Articles and blog posts discuss the number of students entering the academic system, or the obvious need for alternative, non-professor careers in academia and elsewhere. The focus of this post is slightly different: It is about the dark side of the academic selection system and how grants and publications serve as critical pieces of the complex and broken process of hiring young faculty.
The excess of highly qualified individuals looking for an academic job makes the standards for a position exceptionally high. In principle this is a good thing, almost a Darwinian selection of the academically fit. However, the parameters used to determine the best candidate are not as impartial and unbiased as common sense would suggest. Most search committees look at two major criteria for a faculty candidate: publication record and funding history/potential. And here is where the problem starts: Do all publications have the same weight? Innovative and unique scientific ideas are sometimes considered too risky for grant funding. Are search committees more willing to hire an individual with a risky but innovative project, or one with safe projects that are clearly continuations of other researchers’ work? These two questions can be answered in many different ways, and depending on the answer the same candidate can be excellent, average, or not good enough for a position.
1) Grants: innovation or “same old, same old?”
One simple number explains it all: 16.8%. That was the success rate for NIH grant applications in FY 2013. In 1997 it was 30.5%. More money is awarded now ($3.5 versus $1.8 billion), but 1,000 investigators lost their R01 grants last year. Moreover, the average age of principal investigators (PIs) when they receive their first R01 (or equivalent) is 42, and only 3% of R01 holders are 36 or younger. The numbers clearly show that young investigators (despite efforts to specifically increase funding for early-career PIs) have a very hard time obtaining their first grant. The funding agencies ask for innovation, but in an effort to balance this with feasibility, they often prefer safe, secure research proposals. This scenario has a significant consequence in the selection of candidates for faculty positions: Young investigators trying to pursue new, and potentially risky, research may not be hired by research institutions/universities because the research is not perceived to be fundable. As a consequence, the diversity and creativity of scientific research have been trimmed to trendy topics and standard experimental approaches, putting fair scientific competition in danger, as discussed in a previous post from the COMPASS Blog.
2) Publication record: A classic example of the difference between qualitative and quantitative evaluation
One of the most common conversations in the scientific community nowadays is about how impact factor (IF) has been used to measure publication relevance and scientific strength. The misuse of impact factors can be illustrated with the simplest analogy possible:
Strawberries are consumed by more people than watermelons. Thus, strawberries are better than watermelons.
In the end, IF by itself cannot indicate which publication is better. Science should be judged by the quality of the science. I can see IFs being important for publishers in adjusting their journals’ marketing/promotion/accessibility strategies, but not as a mechanism for judging scientific quality. How many times do graduate students and postdocs hear the following sentence: “To get a job you will need a CNS paper”? There is no doubt that a publication in these top-tier journals is a remarkable professional achievement. However, faculty search committees should use a candidate’s publication record as evidence of a compelling scientific story throughout the candidate’s career and his/her potential to bring new scientific knowledge to society. Important factors such as the number of collaborations, the ability to challenge and change dogma, and innovative methods/approaches should be weighed more heavily than a rapid count of how many “good papers” person A has compared with person B. Thus the academic hiring system is at fault for using impact factor as a way to evaluate candidates.
And the consequences of this approach trickle down the scientific pyramid: It is now common to see multiple graduate students in the same lab working on the same project just to produce a multi-first-author paper. It is also common to see graduate students and postdocs with little experience in writing manuscripts. Multi-first authorship is a very recent phenomenon, but I can see, in the near future, co-authors of the same article competing for the same faculty position.
How can we fix the system? The most commonly discussed long-term solution is reducing the admission of graduate students into the system. In 2000 there were 493,000 graduate students in the US; by 2012 the number had spiked to 627,000. To compound the problem, most graduate students in the biomedical sciences want to be professors or investigator-type academic researchers. Fewer admissions will allow better training and mentorship for the students who really want to pursue science. Moreover, universities and mentors should show their students (and postdocs) the possibility of pursuing non-faculty and non-academic scientific jobs, such as science policy, writing, industry, and consulting, as well as many other careers that currently absorb a large fraction of the biomedical PhD workforce.
Universities and research institutions should hire new faculty based on the scientific innovation and independence of the candidate. Candidates should be hired who have created transformational knowledge in their field and have a compelling story of scientific maturity, evidenced by the quality of their publications and awards. Universities should also be concerned about the mentorship skills of faculty candidates: Good bench scientists can be terrible mentors. Universities should clearly state their method of candidate evaluation and stick to it throughout the hiring process. Alternative faculty selection processes that look beyond CVs and IFs also need to be developed. Sandra Schmid, a past president of the ASCB and chair of the Department of Cell Biology at UT Southwestern Medical Center, is a pioneer of this more “holistic” approach to faculty selection. Time will tell whether her proposed model succeeds.
Furthermore, the San Francisco Declaration on Research Assessment (DORA), signed by multiple academic institutions, points out that journal-based metrics should not be used in hiring decisions. However, we need to see clearer action from academic institutions regarding IFs and hiring.
Finally, scientific creativity and innovation ultimately require money. At a time when the “Ice Bucket Challenge” is rampant on social media, public awareness of the current budget cuts in biomedical research should be raised as widely as possible. More funding will allow universities and institutions to hire more people, which will produce more diverse and innovative science. And hiring does not necessarily mean just faculty positions, but also research scientist positions for people who don’t want to be PIs.
In conclusion, universities, research institutions, and scientific societies need to be clear with students and postdocs: The academic system is in crisis. That sounds somber, and it is. What should junior and senior postdocs and students do? Think. A PhD can take you to many types of jobs. Is academia what you really want? Do you want to be a PI? It is valuable to talk to recently hired assistant professors and to people who are working in “alternative” science careers. Be open and clear about your concerns with your mentor.
As a scientific society, the ASCB is doing its job: If you are going to the ASCB Annual Meeting in Philadelphia, check out the multitude of career development events. They will give you a flavor of what your options are as a PhD and help you transition into the next stage in your career.
About the Author:
Christina Szalinski is a science writer with a PhD in Cell Biology from the University of Pittsburgh.