November 2, 2017

Inference Emerges As Next AI Challenge


As developers flock to artificial intelligence frameworks in response to the explosion of intelligent machines, training deep learning models has emerged as a priority, along with matching those models to a growing list of neural and other network designs.

All are being aligned to confront some of the next big AI challenges, including training deep learning models to make inferences from the fire hose of unstructured data.

These and other AI developer challenges were highlighted during this week's Nvidia GPU Technology Conference in Washington. The GPU leader uses such events to bolster its contention that GPUs, some with up to 5,000 cores, are filling the computing gap created by the decline of Moore's Law. The other driving force behind the "era of AI" is the emergence of algorithm-driven deep learning, which is forcing developers to move beyond mere coding to apply AI to a growing range of automated processes and predictive analytics.

Nvidia executives demonstrated a range of emerging applications designed to show how the parallelism and performance gains of GPU technology along with the company’s CUDA programming interface complement deep learning development. Nvidia said Wednesday (Nov. 1) downloads of the CUDA API have doubled over the past year to 2 million.

“Becoming the world’s AI [developer] platform is our focus,” asserted Greg Estes, Nvidia’s vice-president for developer programs. Buttressing that strategy, the company (NASDAQ: NVDA) also this week announced expansion of its Deep Learning Institute to address the growing need for more AI developers.

As it seeks to put most of the pieces in place to accelerate AI development, Nvidia has also pushed GPU technology into the cloud with early partners such as Amazon Web Services (NASDAQ: AMZN). The company’s GPU cloud seeks to create a “registry” of tools for deep learning development. Estes added that the company has taken a hybrid cloud approach in which the deep learning registry runs in the cloud or on premises while deep learning frameworks such as TensorFlow can be delivered via application containers.
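As a rough illustration of that container-based delivery model, the sketch below uses the Docker SDK for Python to pull a containerized TensorFlow framework and run a quick check inside it. The public "tensorflow/tensorflow" image is a generic stand-in here, not a specific entry from Nvidia's registry.

```python
# Sketch of framework delivery via application containers, using the Docker
# SDK for Python. The image name and tag are generic placeholders rather than
# a specific entry in Nvidia's GPU cloud registry.
import docker

client = docker.from_env()

# Pull a containerized deep learning framework from a public registry.
client.images.pull("tensorflow/tensorflow", tag="latest")

# Run a short command inside the container to confirm the framework loads.
output = client.containers.run(
    "tensorflow/tensorflow:latest",
    command=["python", "-c", "import tensorflow as tf; print(tf.__version__)"],
    remove=True,
)
print(output.decode())
```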

Nvidia said its high-end Volta GPUs are available on the AWS cloud.

As the deep learning ecosystem comes together, Estes argued in a keynote address that training models to infer from huge data sets looms large. “AI inference is the next great challenge,” he said. “It turns out the problem is pretty hard.” The scale of the challenge was illustrated by one statistic: an estimated 20 million inference servers are currently crunching data to make inferences that run the gamut from educated guesses to reliable predictions.
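To make the training-versus-inference distinction concrete, here is a minimal, self-contained sketch with toy data and arbitrary layer sizes (not anything shown at the conference): a small network is trained once, then applied to fresh, unlabeled samples, which is the job those inference servers perform at scale.

```python
# Minimal sketch of the training-versus-inference split. The toy data and
# model sizes are placeholders, not details from the article.
import numpy as np
import tensorflow as tf

# --- Training: the expensive, GPU-hungry step done ahead of time ----------
x_train = np.random.rand(512, 20).astype("float32")
y_train = np.random.randint(0, 2, size=(512,))

model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(x_train, y_train, epochs=2, verbose=0)

# --- Inference: the trained model is applied to new, unlabeled data -------
# This is the workload that inference servers handle in production.
new_samples = np.random.rand(8, 20).astype("float32")
predictions = model.predict(new_samples)
print(predictions.ravel())
```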

Estes ticked off a growing list of emerging network designs that underpin current deep learning development, ranging from convolutional networks for visual data and recurrent networks for speech recognition to reinforcement learning and generative adversarial networks (in which two opposing networks seek to “fool” each other, an approach that can be used, for example, to spot a forgery).
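As a concrete instance of the first design on that list, the sketch below defines a tiny convolutional network for image data; the layer sizes and ten-class output are arbitrary choices for illustration, not a model discussed at the conference.

```python
# Illustrative convolutional network for visual data (layer sizes and the
# 10-class output are arbitrary choices for this sketch).
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(64, 64, 3)),
    # Convolutional layers learn local visual features (edges, textures).
    tf.keras.layers.Conv2D(32, (3, 3), activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(64, (3, 3), activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 2)),
    # Dense layers map the extracted features to class scores.
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```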

Hence, Nvidia and its growing list of cloud and developer partners have been laboring to apply GPU parallelism and deep learning frameworks like TensorFlow to accelerate model training. The company released the latest version of its TensorRT AI inference software last month. The combination of Tesla GPUs and CUDA programmability is designed to “accelerate the growing diversity and complexity of deep neural networks,” Nvidia CEO Jensen Huang asserted.
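The sketch below illustrates the general idea of dispatching framework work to a GPU, using TensorFlow's device placement as a stand-in; it is a generic illustration, not TensorRT code, and the matrix sizes are arbitrary.

```python
# Sketch of GPU-accelerated framework work in TensorFlow: a large matrix
# multiply, the core operation behind both training and inference, is placed
# on a GPU when one is available. Generic illustration only, not TensorRT.
import tensorflow as tf

# Fall back to the CPU if no GPU is visible to the framework.
device = "/GPU:0" if tf.config.list_physical_devices("GPU") else "/CPU:0"

with tf.device(device):
    a = tf.random.normal((4096, 4096))
    b = tf.random.normal((4096, 4096))
    # Thousands of GPU cores work on this single operation in parallel.
    c = tf.matmul(a, b)

print("ran on:", device, "result shape:", c.shape)
```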

Nvidia also uses its roadshows to demonstrate its GPU capabilities versus current CPUs. An inferencing example was intended to back its GPU performance claims. The demonstration involved sorting through aerial photos that were labeled according to land uses such as agriculture or an airstrip.

The company claims its approach could map an area the size of New Mexico in a day while a CPU platform would require three months to sort and organize the aerial photos.

Emerging deep learning tools could be applied to applications such as remote sensing and other satellite imagery, where vendors are struggling to sift through hundreds of petabytes of data.

Nvidia partner Orbital Insight Inc. said it is combining satellite imagery with deep learning tools for applications like determining crop yields and urban planning. CEO James Crawford said the company is using GPUs and deep learning tools such as convolutional models to process images taken by synthetic aperture radar, which can see through clouds.

As part of an effort to quantify Chinese energy output, the San Francisco-based startup trained a neural network to spot the natural gas flares associated with fracking. Among the unintended but valuable findings delivered by the deep learning model were details of China’s industrial capacity, including the number of blast furnaces used in steel production.
