May 23, 2018

Intel Adopts ‘Holistic’ Approach to AI

A developers’ conference is generally regarded as a sign that the host company is all-in on a particular technology. Intel Corp.’s inaugural AI DevCon underscores its strategy of moving beyond its dominant position in the server and other processor markets to focus on “the AI-driven future of computing.”

The two-day event in San Francisco focused on forging a “holistic” approach to developing enterprise-scale AI applications while bringing together data scientists, machine and deep learning specialists as well as application developers.

Naveen Rao, general manager of Intel’s Artificial Intelligence Products Group, said a company survey found that more than half of the chip maker’s U.S. enterprise customers are using cloud-based tools running on its Xeon processors for “initial” AI workloads. Beyond Xeon CPUs, Intel is also expanding its AI portfolio to address diverse AI workloads running on its Nervana neural network processor along with FPGAs.

(This week, Intel unveiled its Xeon 6138P processor that integrates its mainstream Xeon CPU server chip with its Arria FPGA.)

Rao stressed that the performance of Intel’s Xeon Scalable processor line has been optimized for machine learning model training and inference. The chip maker (NASDAQ: INTC) is betting the approach will entice customers to leverage their existing Intel CPU infrastructure to take “their first steps toward AI,” Rao said.

Intel also announced several AI initiatives with industry partners this week, one focused on deep neural networks for drug discovery and another on Internet of Things development. The latter project, with C3 IoT, seeks to develop an “AI appliance” based on Intel’s development hardware.

On the software side, Rao said Intel is integrating popular deep learning frameworks such as TensorFlow and MXNet—developed by Google (NASDAQ: GOOGL) and Amazon (NASDAQ: AMZN), respectively—with its nGraph universal deep neural network model compiler. The tool also supports ONNX, an open deep learning model standard spearheaded by Microsoft (NASDAQ: MSFT) and Facebook (NASDAQ: FB), which in turn enables nGraph to support PyTorch, Caffe2, and CNTK.
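The ONNX path is what gives nGraph that framework breadth: a model built in PyTorch, for example, is first serialized to the ONNX interchange format and only then handed to an ONNX-aware compiler backend. The sketch below shows that first step using stock PyTorch; the model, tensor shapes, and file name are illustrative assumptions rather than anything drawn from Intel’s tooling.

import torch
import torch.nn as nn

# A small stand-in classifier; any trained PyTorch module could be exported the same way.
model = nn.Sequential(
    nn.Linear(784, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)
model.eval()

# A dummy input fixes the graph's input shape (here, a batch of one 784-feature vector).
dummy_input = torch.randn(1, 784)

# Serialize the model to ONNX. The resulting model.onnx file is what an ONNX-aware
# backend such as nGraph's importer could then compile for the target hardware.
torch.onnx.export(model, dummy_input, "model.onnx",
                  input_names=["input"], output_names=["logits"])

In principle, the same exported file could serve any ONNX-aware backend, which is the portability the standard is meant to provide.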

Meanwhile, company executives are championing a broad-brush approach to AI development, emphasizing the chip maker’s hefty investments in an expanding AI ecosystem. Intel CEO Brian Krzanich ticked off a list of investments in AI startups such as DataRobot and Lumiata. Intel Capital’s AI investments total more than $1 billion, Krzanich said.

At the same time, the silicon leader is rolling out new scalable processors designed for AI workloads, including “purpose-built silicon” for deep learning training code-named “Lake Crest,” the Intel chief noted. “We are 100-percent committed to creating the roadmap of optimized products to support emerging mainstream AI workloads,” Krzanich declared.

Recent items:

Inside Intel’s nGraph, a Universal Deep Learning Compiler

Intel Details AI Hardware Strategy for Post-GPU Age
