Inside Advanced Scale Challenges|Friday, September 21, 2018

Baidu, Intel Expand Collaboration on AI 

Intel Corp. is teaming with Chinese search giant Baidu on a batch of AI projects spanning FPGA-backed workload acceleration, optimization of a deep learning framework for Xeon Scalable processors and deployment of a vision processing unit for retail applications.

The collaboration was announced during Baidu’s AI developers’ conference in Beijing this week.

As more cloud vendors look to accelerate machine and deep learning workloads, Baidu announced Tuesday (July 3) it would develop a “heterogeneous” computing platform based on Intel FPGAs. Along with boosting datacenter performance, the partners said Baidu would use the platform to offer workload acceleration as a service on the Baidu cloud.

Intel did not identify which FPGA series Baidu would use, but the chip maker recently announced the integration of its Arria family with its mainstream Xeon server chips.

Baidu (NASDAQ: BIDU) announced a similar agreement with GPU vendor Nvidia (NASDAQ: NVDA) during last year’s AI developers’ conference, including plans to bring Nvidia’s Volta GPUs to the Baidu cloud. Baidu also announced last year it would tailor its PaddlePaddle open source deep learning framework for Volta GPUs and bring AI capabilities to the Chinese consumer market.

The Chinese technology giant also said this week it would optimize PaddlePaddle running on Xeon Scalable processors, including tweaks for computing, memory and networking. The partners said they would explore integrating the deep learning framework with the framework-agnostic nGraph deep neural network compiler.

Intel (NASDAQ: INTC) released nGraph as open source this spring. On the front end, the model compiler supports multiple deep learning frameworks; on the back end, it compiles optimized code that runs on multiple processor architectures.

“Data scientists can write once [with nGraph], without worrying about how to adapt their [deep neural network] models to train and run efficiently on different hardware platforms,” Intel said.
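The pattern described above — many framework front ends lowering models to one shared intermediate representation, then one executor per hardware back end — can be illustrated with a minimal sketch. All class and function names below are hypothetical stand-ins, not nGraph's actual API:

```python
# Conceptual sketch of a framework-agnostic graph compiler.
# Names are illustrative only; this is NOT the real nGraph API.

from dataclasses import dataclass


@dataclass
class Node:
    op: str          # e.g. "add", "mul"
    inputs: tuple    # child Nodes or constant operands


def import_model(expr):
    """Pretend framework bridge: lowers a tiny prefix-list
    format, e.g. ["add", 2, 3], into the shared IR."""
    op, *args = expr
    return Node(op, tuple(
        a if isinstance(a, (int, float)) else import_model(a)
        for a in args
    ))


def compile_for(backend):
    """Return an executor for the named 'hardware' back end.
    A real compiler would emit optimized code per target;
    here every back end just evaluates the graph directly."""
    ops = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}

    def run(node):
        if not isinstance(node, Node):
            return node          # constant leaf
        a, b = (run(i) for i in node.inputs)
        return ops[node.op](a, b)

    return run


# One model, written once, runnable on any registered back end.
graph = import_model(["mul", ["add", 2, 3], 4])   # (2 + 3) * 4
cpu = compile_for("xeon")
print(cpu(graph))   # 20
```

The point of the indirection is that adding a new framework means one new importer, and adding a new processor means one new back end — the two sides never need to know about each other, which is the "write once" property the Intel quote describes.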

Baidu’s use of Xeon would make “it simpler for PaddlePaddle developers to code across platforms,” added Gadi Singer, vice president of Intel’s AI Products Group.

Meanwhile, the companies also said they would collaborate on a vision processing unit aimed at retailers. Baidu’s Xeye camera will incorporate Intel Movidius Myriad 2 VPUs for “visual intelligence” applications. The project would combine Baidu’s machine learning algorithms with the customized Movidius VPUs to detect and analyze “objects and gestures” along with shoppers. The goal is to provide “personalized shopping experiences” in retail outlets.

Baidu’s heavy investment in AI research has yielded algorithms that, among other applications, can scan video clips to recognize and classify actions. The company said last year that applications include software that could more accurately screen hours of footage from security cameras.

At its own AI developers’ conference in May, Intel rolled out a “holistic” approach to advancing enterprise-scale applications. To that end, it unveiled new scalable processors designed for AI workloads, including “purpose-built” silicon code-named Lake Crest designed for deep learning training.

About the author: George Leopold

George Leopold has written about science and technology for more than 30 years, focusing on electronics and aerospace technology. He previously served as executive editor of Electronic Engineering Times. Leopold is the author of "Calculated Risk: The Supersonic Life and Times of Gus Grissom" (Purdue University Press, 2016).
