Berkeley Lab Scientists Create ML Pipeline for Interpreting Large Tomography Datasets
Jan. 26, 2023 — Advances in biological imaging have given scientists unprecedented datasets at extremely high resolutions, yet data interpretation tools are struggling to keep up. This is particularly evident in cryo-electron tomography (cryo-ET), where samples exhibit inherently low contrast due to the limited electron dose that can be applied during imaging before radiation damage occurs.
The segmentation of these cell tomograms remains a challenging task, one that is most accurately performed by human beings with an extensive amount of time on their hands. Since this isn’t a feasible way to interpret large datasets, a group of Berkeley Lab scientists recently developed and tested several machine learning techniques organized in a learning pipeline to segment and identify cryo-ET cell membrane structures. A paper describing their approach, “A machine learning pipeline for membrane segmentation of cryo-electron tomograms,” was published this month in the Journal of Computational Science.
“One of the main difficulties with these types of images is that they’re very noisy,” said Chao Yang, a senior scientist in the Applied Mathematics and Computational Research Division at Lawrence Berkeley National Laboratory (Berkeley Lab) and one of the paper’s authors. “It’s the main challenge when you are trying to detect some type of structure or segment the images — it could take one scientist several months to get one tomogram all segmented correctly.”
Although a number of automated segmentation algorithms and tools have been developed in the last few decades for high-contrast 3D medical imaging, most of them perform poorly on cryo-ET because the datasets have a low signal-to-noise ratio as well as missing-wedge artifacts caused by the limited sample tilt range that is accessible during imaging. Given the complexity of the segmentation task and the inherent challenge in obtaining high-quality tomograms, the researchers knew that it was unlikely a single image processing or machine learning technique would produce satisfactory results, so they set out to develop an image analysis and segmentation pipeline that combined various methods.
The project was an LDRD-funded (Laboratory Directed Research and Development Program) collaboration with scientists Nick Sauter and Karen Davies from Berkeley Lab’s Molecular Biophysics and Integrated Bioimaging (MBIB) division. The team used the Cori supercomputer at the National Energy Research Scientific Computing Center (NERSC) at Berkeley Lab to test their methods and further refine the pipeline approach.
“There are a bunch of existing machine learning algorithms out there already, mostly related to medical imaging, but when you try to apply them to cryo-ET they just don’t work, mostly because of the low signal-to-noise ratio,” said Talita Perciano, a research scientist in Berkeley Lab’s Scientific Data Division and another paper co-author. “Recently there’s been a lot of development in using convolutional neural networks (CNNs) to solve these sorts of image segmentation problems, so we looked at those studies and tried them out and found we could get pretty good segmentation, but not perfect. So we knew we had to incorporate other machine learning methods as well.”
A Multi-pronged Approach
The research team knew that a human scientist can do a much better job than a computer program at segmenting and extracting membrane structures because the scientist has prior knowledge about the biological object to be segmented, so they incorporated that idea into their process via several machine learning methods:
- They combined multiple machine learning techniques to enhance the segmentation results produced by a CNN-based procedure, using a popular CNN-based segmentation tool, U-Net, to identify membrane structures in tomogram slices by matching the geometric motifs present in the training data.
- They used reinforcement learning algorithms to connect multiple segmented pieces that belong to the same membrane structure.
- They applied classification algorithms to separate different membrane structures and place fragments of the same structure into the same group.
- They employed parametric and non-parametric fitting algorithms to produce a smooth and continuous surface representation of membranes.
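The stages after the initial CNN segmentation can be illustrated on toy data. The sketch below is hypothetical and greatly simplified, not the authors' implementation: greedy endpoint linking stands in for the reinforcement-learning tracing step, and a low-degree polynomial fit stands in for the parametric surface fitting; all function names and thresholds are assumptions for illustration.

```python
import numpy as np

def link_fragments(fragments, max_gap=2.0):
    """Greedily merge fragments whose endpoints lie within max_gap.
    A toy stand-in for the paper's reinforcement-learning tracing step."""
    frags = [np.asarray(f, dtype=float) for f in fragments]
    merged = True
    while merged:
        merged = False
        for i in range(len(frags)):
            for j in range(i + 1, len(frags)):
                # Distance between the end of fragment i and the start of fragment j.
                if np.linalg.norm(frags[i][-1] - frags[j][0]) <= max_gap:
                    frags[i] = np.vstack([frags[i], frags[j]])
                    del frags[j]
                    merged = True
                    break
            if merged:
                break
    return frags

def smooth_contour(points, degree=3):
    """Fit a low-degree polynomial y = p(x) through the linked points,
    a simple parametric stand-in for the surface-fitting stage."""
    x, y = points[:, 0], points[:, 1]
    return np.poly1d(np.polyfit(x, y, degree))

# Two fragments of the same membrane contour (here points on y = x^2)
# separated by a gap, as a U-Net-style segmentation might leave them.
a = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 4.0]])
b = np.array([[3.0, 9.0], [4.0, 16.0]])
linked = link_fragments([a, b], max_gap=6.0)   # one merged fragment
curve = smooth_contour(linked[0], degree=2)    # smooth parametric contour
```

Here the two fragments are joined into one and the fitted curve interpolates the underlying contour exactly; on real tomograms each stage is, of course, far more involved.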
“The neural networks leave gaps in certain areas, so we used reinforcement learning to sort of trace out what the contour might look like and then combined that with Gaussian-process-based machine learning techniques to smooth out the surface a little bit,” said Yang. “With this system in place that we’ve developed, we’re looking at going from this process taking months with real human biologists to it taking weeks, perhaps even just days.”
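The Gaussian-process smoothing Yang describes can be sketched in a few lines of NumPy: fit a zero-mean GP with a squared-exponential (RBF) kernel to noisy contour heights and take the posterior mean as the smoothed result. This is a generic GP-regression illustration under assumed kernel and noise settings, not the pipeline's actual code.

```python
import numpy as np

def rbf_kernel(xa, xb, length_scale=1.0):
    """Squared-exponential (RBF) covariance between two 1D point sets."""
    d2 = (xa[:, None] - xb[None, :]) ** 2
    return np.exp(-0.5 * d2 / length_scale**2)

def gp_smooth(x_train, y_train, x_query, length_scale=1.0, noise=0.1):
    """Posterior mean of a zero-mean GP: a smoothed version of the data."""
    K = rbf_kernel(x_train, x_train, length_scale)
    K += noise**2 * np.eye(len(x_train))          # observation noise on the diagonal
    alpha = np.linalg.solve(K, y_train)           # (K + sigma^2 I)^-1 y
    K_star = rbf_kernel(x_query, x_train, length_scale)
    return K_star @ alpha                         # posterior mean at the query points

rng = np.random.default_rng(0)
x = np.linspace(0.0, 6.0, 60)
y = np.sin(x) + 0.2 * rng.standard_normal(x.size)  # noisy "membrane height" profile
y_smooth = gp_smooth(x, y, x, length_scale=0.8, noise=0.2)
```

With these settings, the posterior mean tracks the underlying curve while averaging out much of the added noise; the length scale controls how aggressively neighboring points are pooled.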
The biological impact of this type of machine learning system is a much more expansive insight into how various structures support a cell’s function. “If it were just one cell, we could do this by hand, but the real potential is to view the same cellular structure over the entire life cycle of the organism, and how it changes under different environmental conditions and external stimuli,” said Sauter, the MBIB senior scientist who co-authored the paper. “With large data, the new machine learning techniques can help discover how the diverse ensemble of structures supports function in the cell.”
By combining various techniques, the researchers were able to develop a system that takes less time and delivers a better result, said Perciano. The approach worked well for segmenting membrane surfaces in two large biological datasets, and given its flexibility it should be applicable to different datasets with minimal modification.
“Cryo-EM and tomography have exploded in the last decade, so scientists are getting a lot of structures, but these structures need to be interpreted,” said Yang. “So now the challenge is doing just that, and if machine learning tools can help speed up that process, it will have a big impact on biological research.”
About NERSC and Berkeley Lab
The National Energy Research Scientific Computing Center (NERSC) is a U.S. Department of Energy Office of Science User Facility that serves as the primary high-performance computing center for scientific research sponsored by the Office of Science. Located at Lawrence Berkeley National Laboratory, the NERSC Center serves more than 7,000 scientists at national laboratories and universities researching a wide range of problems in combustion, climate modeling, fusion energy, materials science, physics, chemistry, computational biology, and other disciplines. Berkeley Lab is a DOE national laboratory located in Berkeley, California. It conducts unclassified scientific research and is managed by the University of California for the U.S. Department of Energy. Learn more about computing sciences at Berkeley Lab.
Source: Keri Troutman, Berkeley Lab/NERSC