Leveraging HPC Hardware to Run Next-generation Molecular Imaging Analysis
Sponsored Content by Dell
Deriving the three-dimensional (3D) structure of biological macromolecules is critical to fighting cancer and other diseases. A deeper understanding of the structure can help researchers design inhibitors and develop new drugs to treat or cure patients.
While molecular imaging technology has improved over the years, the computational challenges have grown. Currently, cryo-electron microscopy (cryo-EM) is rapidly replacing the traditional X-ray crystallography method for elucidating the 3D structures of single biomolecules in a state that is much closer to their native form. A better computational method is needed to extract the 3D structure from the microscope’s two-dimensional (2D) images.
This is an area where Dr. Youdong (Jack) Mao has focused his energy. He is using Dell PowerEdge servers with new Intel® Xeon Phi™ processor (a.k.a. Knights Landing) technology to develop a high-performance computing (HPC) molecular imaging analysis platform. The work aims to take advantage of higher performance capabilities of today’s multicore, parallel processor architectures.
Why HPC is required
Like many aspects of modern life sciences research, the analysis of molecular imaging involves large volumes of data. Using cryo-EM, a sample under inspection might have 50,000 to 100,000 single particles in random orientations, generating a massive number of molecular images. Analyzing these images to determine the 3D molecular structure can take one million CPU hours.
That’s for one molecule and one experimental run. A research facility with three to five microscopes can produce 25 terabytes of raw data per microscope per day, which after processing results in approximately 2 to 3 terabytes of data per day, and somewhere in the petabyte range per year.
A second factor that impacts HPC requirements is the noisy data. Because biomolecules are highly sensitive to radiation damage by the microscope’s electron beam, the molecular images have to be taken at a low dose. This gives rise to an extremely high degree of noise in the formation of the image. In fact, the signal-to-noise ratio is 10 to 100 times lower than that of normal imaging data. As a result, researchers must use sophisticated averaging and machine learning techniques to classify the images and analyze the 3D structure of a sample.
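The averaging idea can be seen in a few lines of NumPy. This is an illustrative sketch only (the disc-shaped "particle" and noise level are made up, not from any real cryo-EM pipeline): averaging N aligned, independently noisy copies of the same projection shrinks the noise variance by a factor of N, which is why combining tens of thousands of particle images recovers signal that is invisible in any single exposure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true" 2D projection of a particle: a simple bright disc.
size = 64
yy, xx = np.mgrid[:size, :size]
signal = (((xx - size / 2) ** 2 + (yy - size / 2) ** 2) < 100).astype(float)

def snr_after_averaging(n_copies, noise_sigma=5.0):
    """Empirical SNR of the average of n_copies noisy low-dose images."""
    noisy = signal + rng.normal(0.0, noise_sigma, size=(n_copies, size, size))
    avg = noisy.mean(axis=0)
    residual_noise_power = np.var(avg - signal)  # shrinks roughly as 1/N
    return np.var(signal) / residual_noise_power

# SNR grows roughly linearly with the number of averaged copies.
```

With `noise_sigma=5.0` a single image has an SNR well below 1, so the disc is buried in noise; averaging a hundred copies raises the SNR by roughly two orders of magnitude.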
These issues have limited research in the field. The place to start to improve the situation was to update the analysis software. “The software has evolved from code developed decades ago,” said Dr. Mao. Most of the software was designed to run on a single core and does not take the hardware capabilities of the Intel® Xeon Phi™ processor into account.
Dr. Mao has had a multi-year collaboration with Dell and Intel at both the Intel® Parallel Computing Center (IPCC) at Dana-Farber Cancer Institute (DFCI) and at Peking University. “We are trying to modernize the code,” said Dr. Mao. There are cases where the new code, leveraging multiple cores and hardware acceleration technology of the Intel chips, speeds up averaging by a factor of 1,000. “With the speed up, we can think about using more sophisticated software,” said Dr. Mao. For example, artificial intelligence and machine learning methods can be used.
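The modernization described above rests on a simple observation: each particle image can be processed independently, so the work distributes naturally across cores. The sketch below is a generic illustration of that pattern in Python, not ROME's actual code (which is written for Intel® Xeon® and Xeon Phi™ hardware); `process_particle` is a hypothetical placeholder for real per-particle work such as alignment or CTF correction.

```python
from multiprocessing import Pool

import numpy as np

def process_particle(img):
    # Placeholder for real per-particle work (alignment, CTF correction, ...).
    # Here we just zero-center the image as a stand-in normalization step.
    return img - img.mean()

def process_stack(stack, workers=4):
    """Distribute independent particle images across CPU cores."""
    with Pool(workers) as pool:
        return pool.map(process_particle, list(stack))
```

Because the images share no state, this kind of loop scales almost linearly with core count; production codes get further gains from vectorized (SIMD) kernels inside each per-particle step.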
“This opens up new frontiers,” said Dr. Mao. He notes that by harnessing the additional compute capacity, researchers can increase their image analysis throughput by an order of magnitude. Or they can choose to do a deeper analysis of their data. On that latter point, researchers can refine the classification of their images.
Looking to the future
The work in this area goes beyond simply updating old code. The ultimate goal is to develop a cutting-edge, next-generation HPC platform for structural biology, based on the Intel® Many Integrated Core Architecture and the Intel® Scalable System Framework.
Specifically, the research at IPCC at DFCI seeks to capitalize on the tremendous potential of Intel’s processor architecture in system design based on the Scalable System Framework, as well as heterogeneous parallel computing, to process a rapidly increasing volume of electron microscopy data.
One development from this work is the ROME (Refinement and Optimization via Machine lEarning for cryo-EM) software package. The open-source ROME package is a parallel computing software system dedicated to high-resolution cryo-EM structure determination and data analysis, which implements advanced machine learning approaches from modern computer science and runs natively in an HPC environment. ROME 1.0 introduces SML (statistical manifold learning)-based deep classification following MAP-based image alignment. It also implements traditional unsupervised MAP-based classification and includes several useful tools, such as 2D class averaging with CTF (contrast transfer function) correction and a convenient GUI for curation, inspection, and verification of single-particle classes. The ROME system has been optimized on both Intel® Xeon® multi-core CPUs and Intel® Xeon Phi™ many-core coprocessors.
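To make the idea of unsupervised 2D classification concrete, here is a deliberately simplified sketch: k-means clustering on flattened pixel vectors, with the per-class mean serving as the class average. This is a toy stand-in for illustration only; ROME itself uses MAP-based alignment followed by SML-based deep classification, which is considerably more sophisticated than k-means.

```python
import numpy as np

def kmeans_classify(images, k=2, iters=20):
    """Toy unsupervised 2D classification: k-means on flattened images.

    A simplified stand-in for illustration; not ROME's SML algorithm.
    Returns per-image class labels and the per-class average images.
    """
    X = images.reshape(len(images), -1).astype(float)
    centers = X[:k].copy()  # deterministic init: first k images as seeds
    for _ in range(iters):
        # Assign each image to its nearest class center.
        dist = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dist.argmin(axis=1)
        # Update each center to the mean of its assigned images.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    # The "class averages" are simply the mean image within each class.
    return labels, centers.reshape((k,) + images.shape[1:])
```

Averaging within each class is what suppresses the per-image noise; real pipelines additionally align particles and correct the CTF before averaging.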
Making use of Dell PowerEdge servers with the new generation of Intel® Xeon Phi™ processors, researchers have a powerful tool to expand their work in the life sciences. The platform can be used as a general resource for parallel computing applications in structural biology and molecular medicine. Specifically, the combination of Dell and Intel hardware with the optimized analysis software offers a system for the ultra-high-resolution reconstruction of single biomolecules in their native states.
For more information about accelerating life sciences research with new HPC platforms, visit www.dell.com/hpc
For more information on Code Modernization with the Life Sciences Community, visit www.intel.com/healthcare/optimizecode