July 5, 2018

Baidu, Intel partner on AI workloads

Intel Corp. is teaming with Chinese search giant Baidu on a batch of AI projects spanning FPGA-based workload acceleration, optimization of a deep learning framework for Xeon Scalable processors and deployment of a vision processing unit for retail applications.

The collaboration was announced during Baidu’s AI developers’ conference in Beijing this week.

As more cloud vendors look to accelerate machine and deep learning workloads, Baidu announced Tuesday (July 3) it would develop a “heterogeneous” computing platform based on Intel FPGAs. Along with boosting datacenter performance, the partners said Baidu would use the platform to offer workload acceleration as a service on the Baidu cloud.

Intel did not specify which FPGA family Baidu would use, but the chipmaker recently announced the integration of its Arria FPGAs with its mainstream Xeon server processors.

Baidu (NASDAQ: BIDU) also said this week it would optimize its open-source PaddlePaddle deep learning framework to run on Xeon Scalable processors, including tweaks for compute, memory and networking. The partners said they would also explore integrating the framework with Intel's framework-agnostic nGraph deep neural network compiler.
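PaddlePaddle is Baidu's open-source deep learning framework with a Python front end. The article describes no code, so the following is only a minimal sketch, using the Python API of recent PaddlePaddle releases, of what running a model on a CPU target such as a Xeon processor looks like; the Xeon-specific optimizations discussed above happen beneath this API.

# Minimal sketch: define and run a tiny PaddlePaddle model on a CPU target.
# Assumes the paddlepaddle Python package is installed; the article does not
# describe any specific API or the Xeon-specific optimizations themselves.
import paddle
import paddle.nn as nn

paddle.set_device("cpu")  # run on the CPU (e.g., a Xeon host) rather than a GPU

class TinyNet(nn.Layer):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(10, 1)

    def forward(self, x):
        return self.fc(x)

model = TinyNet()
x = paddle.randn([4, 10])   # a batch of 4 random input vectors
print(model(x).shape)       # [4, 1]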

Read the full story at sister website EnterpriseTech.com.
