July 14, 2021

Deci Shows a NAC for Automated Neural Net Construction

(ktsdesign/Shutterstock)

Deep learning researchers have been dancing around a looming performance wall in recent months, as huge neural networks push the limits in terms of computation and power consumption. But now a company called Deci says it has figured out a way around that wall by creating highly efficient neural nets with its automated architectural search technology called AutoNAC.

Deci today announced that its proprietary Automated Neural Architecture Construction (AutoNAC) technology “discovered” a new family of image classification models called DeciNets. When run against the ImageNet dataset, DeciNets demonstrated a very strong accuracy-to-latency tradeoff on Nvidia GPUs.

The Israeli company says AutoNAC discovered DeciNets using roughly two orders of magnitude less computing power than the massive, scale-out approaches based on Neural Architecture Search (NAS) technology. A NAS approach was used to discover EfficientNet, one of the networks that DeciNets bested in the benchmark results released today.

Deci says its AutoNAC software works across different machine learning domains, including classification, detection, and segmentation. Users input their data into the software, and AutoNAC uses it to generate a pre-optimized model that is ready to be deployed for inference. The models are hardware-aware, which Deci says helps squeeze the most performance out of cloud, edge, and mobile platforms.

For the benchmark, Deci ran DeciNets on an Nvidia T4 GPU, which can be deployed on premises or in the cloud, as well as on the smaller Nvidia Jetson, which is designed for edge applications such as autonomous driving. DeciNets achieved among the best accuracy-to-latency tradeoffs on both the T4 and Jetson GPUs.
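Saying a model "dominates" on the accuracy-latency tradeoff means no other model is both more accurate and faster. As a minimal sketch of that idea (with made-up model names and numbers, not Deci's benchmark data), a Pareto frontier over (accuracy, latency) pairs can be computed like this:

```python
def pareto_frontier(models):
    """Return the models not dominated on the accuracy-latency tradeoff.

    A model dominates another if it has at-least-equal accuracy AND
    at-most-equal latency, with at least one strict improvement.
    `models` is a list of (name, accuracy, latency_ms) tuples.
    """
    frontier = []
    for name, acc, lat in models:
        dominated = any(
            (a >= acc and l <= lat) and (a > acc or l < lat)
            for _, a, l in models
        )
        if not dominated:
            frontier.append(name)
    return frontier

# Illustrative numbers only (not from Deci's white paper)
models = [
    ("model_a", 0.76, 4.0),
    ("model_b", 0.79, 3.5),  # more accurate AND faster than model_a
    ("model_c", 0.81, 6.0),  # most accurate, but slower
]
print(pareto_frontier(models))  # -> ['model_b', 'model_c']
```

Here `model_a` falls off the frontier because `model_b` beats it on both axes; the charts Deci published make the same comparison visually.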

(Source: Deci)

“Each presented model was compiled to the T4 GPU using TensorRT, and quantized to both 8-bit (INT8) and 16-bit (FP16) precision,” the company says in a white paper. “The new DeciNets architectures clearly dominate most of the architectures and advance the state-of-the-art for the T4 chip. Importantly, each of the DeciNets was discovered by AutoNAC using roughly four times the computation required to train a single network.”
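The INT8 quantization mentioned in the quote maps floating-point weights and activations into an 8-bit signed integer range, trading a small amount of precision for much faster, cheaper inference. A minimal sketch of the core idea, symmetric quantization with a per-tensor scale (TensorRT's actual calibration is considerably more sophisticated), looks like this:

```python
def quantize_int8(values):
    """Symmetric 8-bit quantization: map floats into the signed INT8 range.

    The scale is chosen so the largest-magnitude value maps to +/-127.
    This is only the core idea; real toolchains calibrate scales
    per-channel using representative data.
    """
    scale = max(abs(v) for v in values) / 127.0
    quantized = [round(v / scale) for v in values]
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate floats; the gap is the quantization error."""
    return [q * scale for q in quantized]

weights = [0.42, -1.27, 0.05, 0.9]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
# every value round-trips to within half a quantization step
assert all(abs(w - a) <= scale / 2 for w, a in zip(weights, approx))
```

Each stored value shrinks from 32 bits to 8, which is why INT8 deployment cuts both memory traffic and latency on GPUs like the T4.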

By comparison, the EfficientNet architecture was discovered using a third-generation NAS technology that required “roughly two orders of magnitude more compute power for its optimization,” the company says.

“Deep learning is powering the next generation of computing,” said Yonatan Geifman, co-founder and CEO of Deci, in a press release. “Without higher performing and more efficient models that seamlessly run on any hardware, consumer technologies we take for granted every day will reach a barrier.”

In the beginning, neural networks were mostly hand-built. But as networks have ballooned in size (GPT-3, for instance, has 175 billion parameters and nearly 100 layers), the task of finding the optimal architecture and wiring up the connections has increasingly fallen to computers. Deci calls its approach “AI to build AI.”

“Deci’s ‘AI that builds AI’ approach is crucial in unlocking the models needed to unleash a new era of innovation, empowering developers with the tools required to transform ideas into revolutionary products,” Geifman said.

Deci is selling both its AutoNAC technology and its DeciNets models, which deep learning developers can use to create image classification systems for real-world deployment.

Related Items:

Researchers Use Deep Learning to Plow Through NASA Snow Radar Data

One Model to Rule Them All: Transformer Networks Usher in AI 2.0, Forrester Says

Deep Learning: The Confluence of Big Data, Big Models, Big Compute
