Seeing beyond sight: new computational approaches to understanding cells

Graham Johnson
Susanne Rafelski

Since Leeuwenhoek’s first illustrations were published, biologists have strived and struggled to extract understanding from the often noisy and nearly transparent interiors of cells. To better determine and comprehend all of the principles that drive cell behavior, we need to collect and integrate multiple types of data into unified dynamic 3D models of whole cells. Important new computational technologies are accelerating progress toward this goal. These new technologies, involving artificial intelligence, graphics processing units, and cloud computing, promise to improve quantification and visual analysis, and they may also hold the key to connecting different modes and scales of microscopy, helping the field of cell science develop an integrated, multiscale understanding of cell structure and behavior.

[A] computer…can generate a probability cloud inside of a newly provided cell image to show likely locations of many intracellular structures….

Merging and Comparing Image Data in an Age of Deep Learning

Most methods for seeing cell structures strike an unsatisfying balance among being biased, perturbing, and mysterious. We can fix a cell and visualize dozens of different structures at once in a highly perturbing fashion. Alternatively, we can minimally perturb a cell and watch it live, but we are then confined to various transmitted light microscopy approaches, or we limit or bias our view by choosing which two or three structures to tag for any given experiment.

Figure 1: Live hiPS cells in a monolayer are labeled for the cell boundary (magenta), DNA (cyan), and microtubules (white). Cells are imaged in 3D using spinning disk confocal microscopy, and the image is displayed with: A) the most common type of volume rendering method, as in this example from www.allencell.org/3d-cell-viewer.html; and B) newly adopted “cinematic lighting” techniques applied to volume rendering, in development at the Allen Institute for Cell Science.

The goal, however, is to causally relate the structures in a living cell to one another when we cannot see more than a few of the structures at a time. This kind of integration can be achieved for cellular data by imaging subsets of structures one or two at a time and standardizing methods to superimpose cells in parallel so that the joint behavior of many structures can be observed. For example, cells going through mitosis can be sorted into discrete stages and compared by translating and rotating the images such that spindle center orientation is aligned. While straightforward conceptually, this low-fidelity approach can be challenging to do well. It needs a consistent way of acquiring and referencing the data for each of the separate intracellular structures so that they can be integrated. It also requires the biologist to formulate a mental model of how any two structures that are not imaged simultaneously can fit together within the same cell.
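
To make the idea of superimposing cells concrete, the sketch below shows one minimal way such an alignment might be implemented in Python with NumPy and SciPy: the cell center is shifted to the middle of a common frame, and the volume is rotated so an estimated spindle axis lies along a shared reference axis. The function names, the choice of reference axis, and the use of linear interpolation are illustrative assumptions, not a description of any published pipeline.

```python
import numpy as np
from scipy import ndimage

def rotation_between(a, b):
    """Rotation matrix taking unit vector a onto unit vector b (Rodrigues' formula)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    a, b = a / np.linalg.norm(a), b / np.linalg.norm(b)
    v, c = np.cross(a, b), np.dot(a, b)
    if np.isclose(c, 1.0):           # already aligned
        return np.eye(3)
    if np.isclose(c, -1.0):          # anti-parallel: rotate 180 degrees about any orthogonal axis
        axis = np.cross(a, [1.0, 0.0, 0.0])
        if np.linalg.norm(axis) < 1e-8:
            axis = np.cross(a, [0.0, 1.0, 0.0])
        axis /= np.linalg.norm(axis)
        return 2.0 * np.outer(axis, axis) - np.eye(3)
    k = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    return np.eye(3) + k + k @ k * (1.0 / (1.0 + c))

def align_to_spindle(volume, cell_center, spindle_axis, reference_axis=(0.0, 0.0, 1.0)):
    """Shift the cell center to the image midpoint, then rotate the 3D volume so the
    estimated spindle axis points along a shared reference axis (illustrative only)."""
    midpoint = np.array(volume.shape, float) / 2.0
    shifted = ndimage.shift(volume, midpoint - np.asarray(cell_center, float), order=1)
    rot = rotation_between(spindle_axis, reference_axis)
    # affine_transform maps output coordinates back to input coordinates,
    # so we pass the inverse (transpose) of the rotation, pivoting about the midpoint.
    inv = rot.T
    offset = midpoint - inv @ midpoint
    return ndimage.affine_transform(shifted, inv, offset=offset, order=1)
```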

A new, alternative approach uses deep learning algorithms to determine statistical spatial relationships between intracellular structures and consistently labeled reference structures, such as the DNA and the cell membrane. When a computer has studied these relationships among thousands of such cell images, it can generate a probability cloud inside of a newly provided cell image to show likely locations of many intracellular structures within that same cell. This technique can be used to generate hypotheses for the colocalization of structures that have not been simultaneously imaged, as well as provide associated precision and confidence values.
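
As a rough illustration of the underlying setup, the hypothetical sketch below defines a small fully convolutional 3D network in PyTorch that takes the two labeled reference channels (DNA and cell membrane) as input and outputs a per-voxel probability map for one intracellular structure. Real models of this kind are far deeper and trained on thousands of cells; the architecture, names, and crop size here are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class StructureProbabilityNet(nn.Module):
    """Toy fully convolutional 3D network: two reference channels in,
    one per-voxel probability map out for a single intracellular structure."""
    def __init__(self, in_channels=2, hidden=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(hidden, 1, kernel_size=1),  # per-voxel logit
        )

    def forward(self, x):
        return torch.sigmoid(self.net(x))  # probability "cloud" in [0, 1]

# Usage sketch: one cell, two reference channels (DNA + membrane), a 32x64x64 crop.
model = StructureProbabilityNet()
reference = torch.randn(1, 2, 32, 64, 64)   # stand-in for the reference images
probability_cloud = model(reference)        # shape (1, 1, 32, 64, 64)
```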

[Approaches such as “cinematic rendering” make] exploring cellular data more analogous to dissecting a cadaver than to looking only at x-ray projections when learning human anatomy.

Deep learning or other machine learning approaches can also be used to teach a computer to predict images of structures directly from images obtained using relatively low-perturbing methods such as brightfield,[1] differential interference contrast,[2] or quantitative phase microscopy.[3] For example, the brightfield approach has been used to view seven structures within the same cell without any of the problematic effects of tagging proteins or subjecting cells to light-induced photodamage. Applying these sorts of methods requires attention to the degree of accuracy needed for each specific application, followed by validation that this accuracy is actually achieved.
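
One way such a label-free model can be trained, sketched below under the assumption of paired transmitted-light and fluorescence 3D crops, is as a per-voxel regression problem. The tiny stand-in network and single training step are illustrative placeholders, not the published architectures.

```python
import torch
import torch.nn as nn

# Hypothetical image-to-image model mapping one brightfield channel to one
# predicted fluorescence channel (any 3D encoder-decoder could slot in here).
model = nn.Sequential(
    nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 1, 3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()  # per-voxel regression against the measured fluorescence

# One illustrative training step on a paired (brightfield, fluorescence) crop.
brightfield = torch.randn(1, 1, 32, 64, 64)
fluorescence = torch.randn(1, 1, 32, 64, 64)
prediction = model(brightfield)
loss = loss_fn(prediction, fluorescence)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```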

Visualization Advances for Enhancing Spatial Relationships…Even Online

Simply looking at cells in 3D can be technically challenging, and visually analyzing 3D volumetric datasets can be mentally taxing. Once again, advances in computing power and techniques can help cell biologists access, analyze, and share this type of data more easily and more effectively.

Most volumetric visualization tools that cell biologists use evolved from 2D image analysis predecessors. They provide powerful analysis capabilities, especially for looking at one slice of a 3D volume or a projection, but they have limited 3D visualization functionality. Transforming a stack of 2D slices into a 3D volume and then interactively rotating it makes complex spatial relationships easier to see and understand.

The most common 3D volume rendering methods make the stacked image slices transparent so the observer can see through the entire volume. However, this can compromise the appearance and detail of the signal (Fig. 1A). Users can adjust the transparency to make denser voxels (the 3D analog to pixels) stand out, but this rendering approach is far removed from how we view and intuitively interact with everyday objects, lacking, for example, any concept of a directional light source, adjustable highlights, or shadows that can help us intuit complex topologies and distance relationships. The volumes can require longer interactions and more experience to interpret, and it can be impossible to correctly interpret depth relationships if the user cannot interactively rotate the volume, for example if it is displayed on a printed page.
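
The transparency-based rendering described above boils down to alpha compositing. The NumPy sketch below accumulates intensity front to back along one axis, with a single user-adjustable opacity parameter standing in for the transparency controls found in typical viewers; it is a toy orthographic projection, not a full ray-casting renderer.

```python
import numpy as np

def composite_along_z(volume, opacity_scale=0.05):
    """Front-to-back alpha compositing of a normalized 3D volume along the z axis.
    `opacity_scale` plays the role of the user-adjustable transparency setting:
    larger values make dense voxels stand out; smaller values let light pass through."""
    vol = (volume - volume.min()) / (np.ptp(volume) + 1e-8)
    color = np.zeros(vol.shape[1:])          # accumulated intensity per pixel
    transmittance = np.ones(vol.shape[1:])   # how much light still passes through
    for z in range(vol.shape[0]):
        alpha = np.clip(vol[z] * opacity_scale, 0.0, 1.0)
        color += transmittance * alpha * vol[z]
        transmittance *= (1.0 - alpha)
    return color
```

Because every voxel contributes only through its opacity, nothing in this model encodes where light comes from, which is exactly why depth relationships can be hard to judge in such renderings.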

The growing ubiquity of cloud computing, fast internet connections, and sophisticated personal computing devices is allowing us to share and explore large 3D datasets in new ways. Volumetric viewers can now run directly in a Web browser[4] using the Web Graphics Library (WebGL) that all of our devices have in common, or by streaming more computationally expensive and sophisticated renderings directly to the browser window from more powerful computers running in the cloud.

New approaches to rendering volumes are also being adapted and developed for use in cell biology to help us literally see cells in a more familiar and thus intuitive manner. For example, by applying “cinematic rendering” techniques to cellular volumetric data, fine details become more visible and spatial relationships more interpretable, even in a 2D static printed image (Fig. 1B). This makes exploring cellular data more analogous to dissecting a cadaver than to looking only at x-ray projections when learning human anatomy. These types of approaches should become an important addition to the arsenal of visual analysis tools needed to study and see a cell from every possible perspective and in every possible way. This cinematic approach can also be adjusted to emulate the method shown in Fig. 1A when transparency is useful.
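
A toy approximation of what directional lighting adds is sketched below: the local intensity gradient serves as a surface normal, and a single light direction modulates each voxel’s brightness. Genuine cinematic rendering relies on far more sophisticated global illumination (soft shadows, scattering), so this only hints at the effect; the light direction and ambient term are arbitrary assumptions.

```python
import numpy as np

def shade_with_directional_light(volume, light_dir=(0.0, 0.5, 1.0), ambient=0.2):
    """Toy local shading: use the intensity gradient as a surface normal and apply
    Lambertian lighting from one directional light source (not true cinematic rendering)."""
    light = np.asarray(light_dir, float)
    light /= np.linalg.norm(light)
    gz, gy, gx = np.gradient(volume.astype(float))
    normals = np.stack([gz, gy, gx], axis=-1)
    norms = np.linalg.norm(normals, axis=-1, keepdims=True)
    normals = normals / np.maximum(norms, 1e-8)
    diffuse = np.clip(normals @ light, 0.0, 1.0)   # cosine between normal and light
    return volume * (ambient + (1.0 - ambient) * diffuse)
```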

Quantification in an Age of Machine Learning

To see cells and their interior components, we need to distinguish and measure them. For centuries, biologists have used segmentation, whether traced by hand with a camera lucida or generated automatically with modern edge-finding algorithms, to identify and define the boundaries of microscopic entities that are often distorted by lenses or low in signal. Segmentation not only boosts signals, it also helps quantify features of cells and visualize image data in meaningful ways, revealing the number and volume of individual structures, as well as their morphology and locations with respect to other cellular landmarks. Since two formal ways to identify causal, interpretable relationships are via perturbation and time series, segmentation algorithms must be robust enough to remain applicable even as perturbations and time change the properties of the structure being segmented. Thus, easily accessible, accurate, and robust segmentation procedures are crucial for measuring quantitative features of cells to uncover patterns and interpret relationships.
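
As a concrete baseline, the sketch below strings together a classical 3D segmentation and measurement workflow with SciPy and scikit-image: smooth, threshold with Otsu’s method, label connected components, and report each object’s volume and centroid. The smoothing parameter and the voxel-volume conversion factor are placeholders that would need tuning and validation for real data.

```python
import numpy as np
from scipy import ndimage
from skimage import filters, measure

def segment_and_measure(volume, voxel_volume_um3=1.0):
    """Classical 3D segmentation sketch: smooth, threshold (Otsu), label connected
    components, and report per-object volume and centroid."""
    smoothed = ndimage.gaussian_filter(volume.astype(float), sigma=1.0)
    mask = smoothed > filters.threshold_otsu(smoothed)
    labels = measure.label(mask)
    objects = []
    for region in measure.regionprops(labels):
        objects.append({
            "label": region.label,
            "volume_um3": region.area * voxel_volume_um3,  # voxel count x voxel volume
            "centroid": region.centroid,
        })
    return labels, objects
```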

We can envision the day when we can build a virtual cell that behaves like a real cell in every conceivable measure….

Traditional computer vision 3D segmentation algorithms and approaches can potentially go quite far in their accuracy, but they often require substantial fine-tuning of parameters, which is hard to automate and apply to large and changing datasets. Emerging machine learning approaches can be applied to these computer vision problems to address the segmentation challenges. For example, at the Allen Institute for Cell Science, we are adopting deep learning to improve segmentation. These approaches require large amounts of validated segmentation data for the algorithms to learn from, and such data are especially hard to generate in 3D. However, one can begin with traditional 3D segmentations and then conjoin them with deep learning in an iterative fashion until a robust and accurate deep learning model is generated.
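
One hypothetical iteration of that bootstrapping loop is sketched below in PyTorch: a small 3D network is trained against masks produced by a classical workflow, and its (curated) predictions can then serve as improved training targets in the next round. The toy network, random stand-in data, and handful of training steps are purely illustrative.

```python
import torch
import torch.nn as nn

# Tiny stand-in 3D segmentation network; a real workflow would use a deeper model.
model = nn.Sequential(
    nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 1, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

raw = torch.randn(1, 1, 32, 64, 64)                             # raw image crop
classical_mask = (torch.rand(1, 1, 32, 64, 64) > 0.5).float()   # mask from a classical workflow

for step in range(3):                       # a few illustrative training steps
    loss = loss_fn(model(raw), classical_mask)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The network's output becomes a candidate segmentation for human curation,
# which can then replace `classical_mask` in the next training round.
refined_mask = (torch.sigmoid(model(raw)) > 0.5).float()
```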

Putting Cell Data All Together—The Integrated Multiscale Spatiotemporal Dream

Developing an integrated multiscale understanding of cell structure and behavior remains one of the great challenges of cell biology. New computational technologies conjoined with large image-based datasets promise to bring us closer to this vision. Looking forward, we can already imagine modifying and applying machine learning approaches to building transfer functions that will conjoin images obtained at different spatial and even temporal scales. Could these spatial data alone reveal the principles by which cells organize themselves, converting genetic information into 3D dynamic cells, or will we need to layer on other types of data? We can envision the day when our community could build a virtual cell that behaves like a real cell in every conceivable measure, but this raises new questions about how we will query it, explore it, and make sense of it. We find these to be among the most exciting questions to contemplate in this increasingly fascinating era of technological breakthroughs in whole cell biology.

References

1. Ounkomol C, Seshamani S, Maleckar MM, Collman F, Johnson GR (2018). Label-free prediction of three-dimensional fluorescence images from transmitted-light microscopy. Nature Methods, doi: 10.1038/s41592-018-0111-2.

2. Christiansen EM et al. (2018). In silico labeling: predicting fluorescent labels in unlabeled images. Cell 173, 792–803.

3. https://nanolive.ch

4. https://bioimage.ucsb.edu

About the Authors:


Susanne M. Rafelski is a team director at the Allen Institute for Cell Science.
Graham T. Johnson is a team director at the Allen Institute for Cell Science.