By Erika Shugart
How do we make decisions? And how can we do it better?
My husband and I will need to buy a car in the near future, and the thought fills me with dread. There are so many options! Investigating reliability, safety, and environmental impact, never mind the “driving experience,” then weighing all of that against our needs and the cost seems to take too many hours. I haven’t even gotten into the optional features and color. I once read that the number of cup holders in a car is a major determinant of its appeal to some car buyers. Last time my family needed to buy a car, we asked our neighbors for their advice and wound up with one of the million blue minivans that populate our suburban neighborhood. My husband and I hate it, but it is practical. I should have more expertise in automobiles, since I drive a car every day, but I have never taken the time to learn about them.
We are all busy, and no one has the time to become an expert in all things. In the lab, hypotheses are formulated and tested, and evidence derived from experimentation is applied to decisions. This is the way of science. However, everyone takes mental shortcuts, even scientists. It’s human. But if scientists want to have the greatest impact outside of their immediate arena of research, they must approach these areas intentionally, applying the evidence-based approaches identified over the past decade rather than falling back on shortcuts. We also need to be smart in our use of metrics. I encourage you to think about those areas in which you can take a more evidence-based, rigorous approach to improving your practice.
I believe that there are four areas in which this approach offers the greatest potential for impact:
- Communicating with non-scientist audiences, such as the public and policy makers,
- Teaching the next generation of scientists and science-literate citizens,
- Increasing the participation of underrepresented groups in science, and
- Measuring the impact of research.

I am going to touch upon each of these briefly and share resources that can get you started with learning how to take a more rigorous approach to your practice in these areas.
Communicating with Non-scientists
In my previous role as a science communication practitioner, I had the opportunity to work with marvelous scientists from disciplines ranging from global climate modeling to decision making to cell biology. All of them had a deep-seated curiosity about the natural world and brought rigor to their efforts to understand the phenomena they studied. However, when it came to understanding and selecting communication approaches, they often fell back on assumptions and gut feelings.
One of the first shortcuts that scientists sometimes take in the communication field is to adopt a one-size-fits-all solution to get the word out to the “public.” However, the public is not monolithic—reaching a science-interested, documentary-watching mother of two is quite different from reaching a 16-year-old social media maven. It is important to think carefully about what you are trying to accomplish and who might be interested in your message so that you can select an approach that will reach your audience.
Another common mistake scientists make when communicating with the public is to assume that they can convince people to agree with them simply by giving them more information. Unfortunately, this is not the case. An appeal to emotion or to your audience’s sense of identity is often more effective; the way to get people excited about science is not to assail them with facts but to appeal to their hearts. Science communication is an active field of research that can inform our approaches to reaching non-scientist audiences. You can start to learn more about these topics and the wide range of social science literature that informs this area with the Sackler Colloquia on the Science of Science Communication (http://bit.ly/2b0SOxH), two meetings that brought researchers from a wide range of disciplines together with science communication practitioners.
Teaching Science
As educators of the next generation of scientists, as well as of students who may never be scientists but who live in a world where they need to understand and use science, it is imperative that we do our best to teach them not just what we know but how we know it. When I was in university it was state of the art to pose an occasional Socratic question or explode a hydrogen-filled balloon in chemistry class, but straight lectures were de rigueur.
Today the field of biology education has demonstrated that there are better approaches to teaching. Have you heard of terms such as active learning, assessment, and backward design? If so, that is terrific and I hope your students’ assessments reflect your enlightened practice. Active learning builds on the understanding that students learn better when they are active in the process. This can range from simple activities such as think-pair-share, which pauses a lecture to allow students time to answer a question by working with a partner, to more complex approaches such as problem-based learning, in which students work collaboratively to understand a complex problem in biology. If you haven’t had the chance to learn about recent advances in biology education, you can start with ASCB’s education journal, CBE—Life Sciences Education (LSE; www.lifescied.org), which is available free online.
Increasing Diversity in Science
Science has a diversity problem. We need individuals of different genders, races, and ethnicities from different regions to participate in the scientific endeavor. Some of the approaches used to try to fix the lack of diversity don’t work. For example, I was dismayed to read the results of a recent study (http://ftp.iza.org/dp9904.pdf) that examined the impact of stopping the tenure clock for parental leave. The study, which looked at the top 50 economics departments, found that gender-neutral parental leave policies resulted in a 22% decrease in the number of women obtaining tenure and a 19% increase in the number of men receiving tenure. While the results are the opposite of the goals of these common programs (and it remains to be seen if these findings will have an impact on university policy), the approach is exactly what we need to be doing—trying new programs that we hypothesize will help the situation, examining their impact, and changing them if needed.
In addition to learning from experiments in the academic world, we can learn from the business world, where increasing diversity is a very active area. Companies have been measuring diversity in their ranks and the impact of programs such as diversity training and mentoring for many years. The Harvard Business Review’s July–August issue (https://hbr.org/archive-toc/BR1607; paywall) focused on diversity and featured an article on research that examined the practices of midsized U.S. companies. The research found that voluntary diversity training works better than mandatory training, and that targeted college recruitment practices and diversity task forces also worked. By building on lessons learned in different sectors, we can make the changes needed to create a diverse workforce in the sciences.
Evaluating Science and Scientists
The final area that can be improved by a more rigorous approach is scientists’ evaluation of one another. Metrics are invaluable for assessing the impact of programs and for decision-making, but when we depend too heavily on a single metric to the exclusion of other data, we can make poor choices. This problem is epitomized in science’s over-dependence on the journal impact factor (JIF) in decisions about departmental worth, promotion, and tenure. As I discussed in a note in the ASCB Post in July (www.ascb.org/note-bias-novelty), many of the most novel papers in science receive most of their citations five years after initial publication, and they are published in lower-JIF journals.
The JIF is a metric, but it is not the best metric for measuring the worth of an individual paper, or even of a journal. This is why ASCB was one of the lead organizers of the San Francisco Declaration on Research Assessment (DORA; www.ascb.org/dora). As outlined in DORA, there is a better approach we can take to measuring the worth of research. We can look at article-level metrics rather than journal-level metrics. We can judge the merits of the science by the content of the paper, rather than assessing its worth solely by where it was published. We can consider our colleagues’ broader scientific contributions, such as datasets and tools, rather than base decisions only on published articles. The movement to stop over-reliance on the JIF is gaining momentum: in recent weeks a major society publisher, the American Society for Microbiology, stopped advertising JIFs (http://bit.ly/2aVCC50), and Nature has announced (http://go.nature.com/2biQra3) that it will present a wider range of metrics because of the JIF’s limitations. I encourage you to become a signer of DORA and begin to put the declaration into practice.
There are some things in life that can’t be improved by a scientific approach, such as love or enjoying a beautiful piece of art. But I have briefly touched upon four diverse topics that can benefit from the practice and rigor of science. I hope that you will take time to consider what you might do in these areas. Meanwhile, I will work to improve my approach to buying cars, even if that means developing a formula for the optimal number of cup holders my family needs.