
Intel to Acquire AI Startup Nervana Systems 

If we needed another sign that Intel is serious about mining AI market opportunities, it came today when the chip company announced it had inked a “definitive agreement” to acquire artificial intelligence and deep learning company Nervana Systems. Financial terms haven’t been disclosed yet, but a source familiar with the deal told Recode it’s worth more than $350 million.

Founded in 2014, San Diego-based Nervana has differentiated itself by building a full-stack solution, from algorithms down to silicon, for accelerating machine learning at scale. Now Intel plans to leverage Nervana's IP and expertise to expand its own AI capabilities.

“We will apply Nervana’s software expertise to further optimize the Intel Math Kernel Library and its integration into industry standard frameworks,” said Diane Bryant, Intel’s executive vice president and general manager of the Data Center Group. “Nervana’s Engine and silicon expertise will advance Intel’s AI portfolio and enhance the deep learning performance and TCO of our Intel Xeon and Intel Xeon Phi processors.”
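The software side of that work centers on the dense linear algebra kernels that dominate deep learning. As a rough illustration (not Intel's or Nervana's code, and with sizes chosen arbitrarily), the sketch below shows the forward pass of a single fully-connected layer in NumPy; when NumPy is linked against an optimized BLAS such as MKL, the np.dot call is handed off to that library's GEMM routine, which is where nearly all of the layer's compute time goes.

import numpy as np

# Illustrative sketch: forward pass of one fully-connected layer.
# When NumPy is built against an optimized BLAS (e.g. Intel MKL),
# the matrix multiply below dispatches to that library's GEMM routine.
batch, in_dim, out_dim = 128, 4096, 4096   # illustrative sizes, not from the article

x = np.random.rand(batch, in_dim).astype(np.float32)    # input activations
W = np.random.rand(in_dim, out_dim).astype(np.float32)  # layer weights
b = np.zeros(out_dim, dtype=np.float32)                 # biases

y = np.dot(x, W) + b      # the single GEMM call accounts for almost all of the layer's FLOPs
y = np.maximum(y, 0.0)    # ReLU nonlinearity on top of the GEMM output

print(y.shape)  # (128, 4096)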

“Nervana intends to continue all existing development efforts including the Nervana Neon deep learning framework, Nervana deep learning platform, and the Nervana Engine deep learning hardware,” wrote Naveen Rao, CEO and co-founder of Nervana, on the company’s website. “The combination of Nervana’s technology and expertise incorporated into Intel’s portfolio will take deep learning/AI solutions to the next level.”

With the official launch of the “Knights Landing” Phi in June, Intel is pushing hard to establish itself as the dominant silicon vendor for machine learning workloads. In order to do so, it will have to get around NVIDIA, which has made a name for itself in the deep learning community with GPUs such as the Titan X, the GTX 1080 and the high-end Tesla K40 and K80. Intel claims that its processors power more than 97 percent of servers deployed to support machine learning workloads today, but GPU-heavy nodes are doing a lot of the heavy lifting. The growing need for machine learning at scale is also stimulating custom silicon efforts, such as Google’s highly publicized Tensor Processing Unit (TPU).

The 48-person Nervana team will join Intel’s Data Center Group after the deal closes. “We will continue to operate out of our San Diego headquarters and will retain our talent, brand, and start-up mentality,” said the Nervana CEO, adding that work is underway on a “revolutionary new architecture” for deep learning.

About the author: Tiffany Trader

With over a decade’s experience covering the HPC space, Tiffany Trader is one of the preeminent voices reporting on advanced scale computing today.
