Chip Startup Accelerates AI with MRAM
The AI chip market continues to heat up with the recent release of a series of AI accelerators with embedded memory that target the booming edge computing sector.
Gyrfalcon Technology Inc. (GTI) said its “production-ready” ASIC, dubbed the Lightspeeur 2802M, incorporates low-power embedded MRAM as the Silicon Valley startup zeroes in on what it calls “Edge AI.” Taiwanese chip foundry TSMC (NYSE: TSM) began manufacturing the 22-nanometer design last June, incorporating its embedded MRAM process technology.
The combination leverages on-chip memory as an AI processor, reducing data movement within edge devices while accelerating the processing of AI models.
The Milpitas, Calif.-based startup emphasizes the combination of accelerated AI performance with the low power and non-volatility of its proprietary “MRAM engine.” The 40 MB of embedded memory is designed to hold either a large AI model or multiple models on a single ASIC. Those models could include image classification, facial recognition or voice commands.
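For a rough sense of what 40 MB of on-chip memory can hold, the back-of-the-envelope sketch below checks whether some widely known CNN architectures would fit, assuming 8-bit quantized weights. The parameter counts are approximate published figures for those public models, not GTI specifications, and the calculation is illustrative only.

```python
# Back-of-the-envelope: do common CNN models fit in 40 MB of on-chip memory?
# Parameter counts are approximate public figures; 8-bit (1 byte) weights assumed.
MEMORY_MB = 40

models = {
    "MobileNetV1": 4.2e6,   # ~4.2M parameters
    "ResNet-50": 25.6e6,    # ~25.6M parameters
    "VGG-16": 138.0e6,      # ~138M parameters
}

def size_mb(params, bytes_per_weight=1):
    """Approximate weight-storage footprint in megabytes."""
    return params * bytes_per_weight / 1e6

for name, params in models.items():
    mb = size_mb(params)
    print(f"{name}: {mb:.1f} MB -> fits in {MEMORY_MB} MB: {mb <= MEMORY_MB}")
```

By this estimate, several compact models (or one mid-sized model) would fit, consistent with the company's claim of supporting large or multiple models on one ASIC.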
Launched in December, Lightspeeur 2802M is the first in a series of planned embedded MRAM chips the startup plans to extend to what it refers to as “super AI accelerator chips.” The architecture pairs GTI’s MRAM engine with a “matrix processing engine” based on convolutional neural networks as well as its “AI Processing in Memory” framework.
“This optimizes the speed of processing, achieving high [theoretical operations per second] performance, while also saving tremendous amounts of power by avoiding management of data in discrete memory components,” the company said.
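GTI's matrix processing engine is described as being based on convolutional neural networks; such matrix engines typically accelerate convolutions by recasting them as matrix multiplications (the standard im2col technique). The minimal sketch below illustrates that recasting in plain NumPy. It is a generic illustration of the technique, not GTI's implementation.

```python
import numpy as np

def im2col_conv(image, kernel):
    """2-D valid cross-correlation expressed as a single matrix product."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    # Flatten every kh x kw patch into a row -> (oh*ow, kh*kw) matrix
    cols = np.array([image[i:i + kh, j:j + kw].ravel()
                     for i in range(oh) for j in range(ow)])
    # One matrix-vector product replaces the sliding-window loop
    return (cols @ kernel.ravel()).reshape(oh, ow)

image = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.ones((2, 2))
print(im2col_conv(image, kernel))  # 3x3 output of patch sums
```

Because the whole workload reduces to large matrix products over the model's weights, keeping those weights resident in on-chip memory avoids the off-chip transfers that dominate the energy cost in conventional designs.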
The Lightspeeur AI accelerator technology has also been incorporated into a USB stick. The startup touts its Laceli AI compute stick as delivering better performance and 90x the power efficiency of Intel’s (NASDAQ: INTC) Movidius Neural Compute Stick, unveiled in July 2017.
The GTI accelerator chips support AI models built with Caffe, TensorFlow and other neural network and machine learning development frameworks, the company said.
Like AI accelerator competitor Intel and graphics chip leader Nvidia (NASDAQ: NVDA), the AI chip startup is also aiming variations of its Lightspeeur design at cloud servers.
GTI claims its 16-chip server (based on the 28nm Lightspeeur 2803S) can outhustle Nvidia’s Tesla processor in both performance and power efficiency, delivering 271 TOPS at 28 watts (as reported by Synced).
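The reported server figures imply a simple power-efficiency calculation, sketched below using only the numbers quoted above:

```python
# Power efficiency implied by the reported figures: 271 TOPS at 28 watts
tops = 271
watts = 28
tops_per_watt = tops / watts
print(f"{tops_per_watt:.1f} TOPS/W")  # roughly 9.7 TOPS/W
```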
As the proliferation of edge devices boosts demand for in-memory processing, approaches such as GTI’s are also boosting the prospects for MRAM, or magneto-resistive random-access memory. As AI applications spread, MRAM’s power, density and data-retention advantages may help it displace the dominant SRAM technology, observers note.