Can we anticipate graduate student success if we can't assess it?

How we choose the next generation of scientists is at the root of a sustainable scientific enterprise. The true value of a PhD may therefore lie in training leaders who can advance science while also gaining the skills needed to succeed both during and after graduate school. A successful graduate of a PhD program must be able to contribute expertise or knowledge that advances a particular field. To attain this goal, they must possess skills such as critical thinking, problem solving, perseverance, conviction, and adaptability. These traits cannot be assessed by certain quantitative measures on which graduate schools rely during the initial stages of the admissions process, a practice that eliminates otherwise promising candidates from the pool of applicants considered competitive for graduate school.


In recent studies, graduate student success has been measured by productivity and by progress through the graduate program. In January 2017, a study published by Joshua Hall et al. (also discussed in this podcast) used the number of publications as a proxy for graduate student productivity. The study followed 280 graduate students who entered the Biological and Biomedical Sciences Program (BBSP), an umbrella admissions program at the University of North Carolina, Chapel Hill, from 2008 to 2010. Based on the number of first-author publications during graduate school, students were classified as either highly productive (3+ papers) or of average productivity (1-2 papers). Students with no first-author publications were further subdivided into those with at least one middle authorship (0+ group) and those with no publications of any kind (0 group). The authors then compared several admissions metrics among the 3+, 1-2, 0+, and 0 groups of students, starting with the GRE. They found that GRE scores did not differ between these groups of students, but did differ by gender and race/ethnicity, indicating potential bias in this metric. Notably, students with below-average GRE scores regularly did well in terms of productivity, while students with near-perfect GRE scores were sometimes minimally productive in graduate school. The GRE was therefore a poor indicator of graduate student productivity in their study.

Another study published on the same day by Liane Moneta-Koehler et al. also found that GRE scores were not an accurate predictor of research productivity or of progress in the graduate program. This study examined 683 graduate students who matriculated into the Interdisciplinary Graduate Program (IGP), an umbrella admissions program at Vanderbilt University, from 2003 to 2011. Research productivity was defined as the number of first-author publications, the number of presentations, and the ability to obtain an individual grant or fellowship. Progress in the graduate program was defined by whether students graduated with a PhD, passed the qualifying exam, and how quickly they reached their defense. Together, these two studies highlight the poor ability of GRE scores to predict graduate student performance.

According to the Moneta-Koehler et al. study, the GRE was also not an accurate predictor of GPA for graduate students in the IGP, and in the Hall et al. study the GPA of BBSP students did not differ among the productivity groups. Overall, these findings on the GRE and GPA are not surprising, given that the skills needed to perform well in graduate school differ greatly from those needed to succeed in a classroom setting or on a standardized test.

Besides the GRE and GPA, other criteria used by graduate admissions committees include previous research experience, recommendation letters, personal statements, and in-person interviews. A study by Orion D. Weiner, published in February 2014, examined differences in undergraduate metrics between student groups, including prior research experience. The study consisted of 52 graduate students who were ranked (31 highest-ranked and 21 lowest-ranked) based on the opinions of 30 faculty members with a significant history of training graduate students in the Tetrad program at the University of California, San Francisco. The faculty were asked to identify the very best versus the most underperforming students from the past two decades in the program, drawing on their own labs, thesis committees, rotations, etc. The study found that the number of years of research experience prior to graduate school was significantly higher for the highest-ranked students than for the lowest-ranked students. Prior research experience therefore did correlate with graduate student performance in this study.

In contrast, no significant difference in the amount of previous research experience was observed between the most productive and least productive student groups in the January 2017 study by Hall et al. In fact, the most productive group had the lowest mean number of months of previous research experience, and the least productive group had the highest. However, because every student in the UNC Chapel Hill cohort had some previous research experience (6 months to 1 year for the vast majority), the authors could not conclude from this study that research experience is unimportant for graduate school success.

Qualitative measures such as recommendation letters and personal statements can reveal an applicant’s personal qualities, which are critical for success as a scientist. The value of a recommendation letter from a research advisor relates to the amount of time the advisor has observed the applicant in the laboratory. Ratings from recommendation letters did correlate with graduate school productivity in the Hall et al. study: the most productive students had higher letter ratings than the least productive students, and students with the best average recommender ratings published more first-author papers during graduate school than students with weaker ratings. Thus, recommendation letter ratings were a useful predictor of graduate student productivity in this study. While not examined in the study, personal statements can also provide valuable insight into an applicant’s motivations and goals for graduate school and may prove to be another useful metric in this regard.

In contrast to ratings by recommendation letter writers, scores given by interviewers following one-on-one interviews with applicants who passed the first round of admissions did not distinguish between the most productive and least productive groups of students in the Hall et al. study. Students with the highest average interview scores also did not publish more papers during graduate school than those with lower interview scores. Thus, interview scores were a poor indicator of graduate student productivity in this study. Compared with recommendation letters from faculty members, interview scores may also carry less weight in assessing the personal traits of future graduate students.

These studies highlight the idea that the likely success of graduate school applicants should neither be evaluated solely on quantitative measures nor judged by any single metric. Indeed, as Orion Weiner states, “There is a temptation to collapse everything down to a single number, which is not the most meaningful way, but we need to use more than the number of papers published…there is no substitution for the amount of hard work needed to evaluate the student at every level.” Practically speaking, taking a more holistic approach when evaluating applicants during the graduate school admissions process means “not overly weighting a single score or number, but really digging into an application and assessing the applicant’s experiences as a whole,” said Joshua Hall.

Overall, these studies raise questions such as whether metrics that show no correlation with graduate school success should still be used during the admissions process, and whether eliminating them could alleviate some of the bias in that process. On a broader scale, more time should be spent training graduate students to become better scientists, which includes cultivating the personal traits critical to their success both in graduate school and beyond. This will help ensure that science continues to advance with the best and brightest minds at its forefront, and with people who are the kinds of scientists we envision for tomorrow.

Acknowledgments: We thank Dr. Joshua Hall (UNC Chapel Hill) and Dr. Orion Weiner (UCSF) for providing quotes for this article. We also thank them for insights into the graduate school admissions process, which helped shape the article into a more comprehensive analysis of this topic.

References:

Hall JD, O’Connell AB, Cook JG (2017). Predictors of Student Productivity in Biomedical Graduate School Applications. PLoS ONE 12(1): e0169121. doi:10.1371/journal.pone.0169121

Hall JD (2017). “Does the GRE Predict Which Students Will Succeed?” [Audio podcast]. Retrieved from http://hellophd.com/2017/01/065-does-gre-predict-which-students-will-succeed/

Moneta-Koehler L, Brown AM, Petrie KA, Evans BJ, Chalkley R (2017). The Limitations of the GRE in Predicting Success in Biomedical Graduate School. PLoS ONE 12(1): e0166742. doi:10.1371/journal.pone.0166742

Weiner OD (2014). How should we be selecting our graduate students? Molecular Biology of the Cell 25(4): 429–430. doi:10.1091/mbc.E13-11-0646


About the Authors:

Adriana Bankston is a Principal Legislative Analyst in the University of California (UC) Office of Federal Governmental Relations, where she serves as an advocate for UC with Congress, the Administration, and federal agencies. Prior to this position, Adriana was a Policy & Advocacy Fellow at The Society for Neuroscience (SfN), where she provided staff support for special and ongoing projects, including SfN’s annual lobby event and the society’s annual meeting. In addition to working at UC, Adriana also serves as Vice-President of Future of Research, and is Chief Outreach Officer at the Journal of Science Policy and Governance. Adriana obtained her PhD in Biochemistry, Cell and Developmental Biology from Emory University and a Bachelor’s in Biological Sciences from Clemson University.
Gary McDowell is Executive Director of The Future of Research, Inc. (http://futureofresearch.org/), a nonprofit organization seeking to champion, engage and empower early career researchers with evidence-based resources to help them make improvements to the research enterprise. He is a COMPASS alumnus.