
Graphcore Touts 100x ML Speedup with PCIe Plug-In 


Graphcore emerged from stealth mode this week with news of a $30 million Series A round to help finance ongoing development of its machine learning (ML) and deep learning acceleration solutions, including a PCIe card that plugs directly into a server’s bus.

The company says the combination of its development framework, called Poplar, and its PCIe-based Intelligent Processing Unit (IPU) can speed up ML and deep learning workloads by 10x to 100x.

The IPU card plugs into the PCIe bus of standard x86 servers to provide a processing boost. Armed with multiple IPU cards, a company could enjoy the benefits of “massively parallel, low-precision floating-point compute” at “much higher compute densities” than other solutions.
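
Low-precision floating-point arithmetic is a general technique rather than anything the article attributes specifically to the IPU's internals, so the NumPy sketch below is only a generic illustration of the trade-off it describes: dropping from float32 to float16 halves the memory per value (letting more numbers fit in the same silicon and bandwidth) at the cost of a small numerical error.

```python
# Illustrative only: generic low-precision arithmetic with NumPy,
# not Graphcore's IPU implementation (which the article does not detail).
import numpy as np

rng = np.random.default_rng(0)

# The same 256x256 matrix product in single and half precision.
a32 = rng.standard_normal((256, 256), dtype=np.float32)
b32 = rng.standard_normal((256, 256), dtype=np.float32)
a16, b16 = a32.astype(np.float16), b32.astype(np.float16)

full = a32 @ b32
half = (a16 @ b16).astype(np.float32)

print("bytes per value: float32 =", a32.itemsize, ", float16 =", a16.itemsize)
print("max relative error from float16:",
      np.max(np.abs(full - half) / np.abs(full).max()))
```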

Graphcore is positioning its IPU cards to take on the workloads that some are looking to run on more exotic hardware, such as graphics processing units (GPUs) or field programmable gate arrays (FPGAs). GPUs and FPGAs represent “stopgap measures” to solving the compute challenges posed by emerging machine intelligence applications, says Graphcore CEO Nigel Toon.

“Our IPU system provides a less restrictive, more efficient, and more powerful solution, making it easier and faster to produce applications, devices and machines that are much more intelligent and which can become more and more useful over time,” Toon says in a press release.

The main problem with GPUs is that they aren’t data-driven. “GPUs have been built to run programs that completely describe the algorithm,” Toon told CNBC. “Machine learning is different. You are trying to teach the system using data and that requires a different style of compute.”

The company plans to sell two pieces of hardware: the PCIe-based IPU-Accelerator card and an IPU-Appliance aimed at boosting the performance of both the training and inference stages of machine intelligence workloads. The company says it will start shipping the appliance next year.

Meanwhile, the company is also developing Poplar, the programming framework that developers will use to write applications to run on the IPUs. Poplar provides C++ and Python bindings and works with TensorFlow and MXNet, frameworks that have become popular for writing deep learning applications.
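
The article does not describe Poplar’s own API, so the sketch below covers only the TensorFlow side of that claim: a minimal TensorFlow 1.x-style graph of the kind such a framework would take as input and compile for an accelerator. The layer sizes, names, and the final session run are illustrative assumptions, and the IPU-targeting step itself is omitted.

```python
# Illustrative only: a small TensorFlow (1.x-style) graph of the sort a
# framework such as Poplar would compile for an accelerator. The Poplar
# compilation step is not shown; its API is not detailed in the article.
import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 784], name="features")
w = tf.Variable(tf.random_normal([784, 10], stddev=0.1), name="weights")
b = tf.Variable(tf.zeros([10]), name="bias")
logits = tf.matmul(x, w) + b  # a single dense layer

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    batch = np.random.rand(32, 784).astype(np.float32)
    print(sess.run(logits, feed_dict={x: batch}).shape)  # (32, 10)
```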

Toon and company have been working on Graphcore products for the past two years. This month the company completed its first round of funding, with investments from Bosch, Samsung, Amadeus Capital, C4 Ventures, Draper Esprit, Foundation Capital, and Pitango Capital.

Hongquan Jiang, a partner at Robert Bosch Venture Capital GmbH, sees promise in Graphcore. “A new processor technology is needed for intelligent systems and Graphcore has the first technology that we have seen which really delivers the performance and efficiency needed for this style of compute,” Jiang says.

“Since its foundation C4 Ventures has been backing hardware companies revolutionizing their sector and we believe Graphcore’s disruptive technology is a game changer in the computing field,” says Pascal Cagni, founding partner of C4 Ventures.

Graphcore has offices in Bristol, UK, and Palo Alto, California.
