January 3, 2018

Developers Will Adopt Sophisticated AI Model Training Tools in 2018

James Kobielus


Training is the make-or-break task in every development project that involves artificial intelligence (AI). Determining an AI application’s fitness for its intended use requires training it with data from the solution domain into which it will be deployed.

In 2018, developers will come to regard training as a potential bottleneck in the AI application-development process and will turn to their AI solution providers for robust tools for training models across disparate applications and deployment scenarios. By the end of the coming year, AI model training will emerge as the fastest-growing platform segment in big data analytics. To keep pace with growing developer demand, most leading analytics solution providers will launch increasingly feature-rich training tools.

During the year, we’ll see AI solution providers continue to build robust support for a variety of AI-model training capabilities and patterned pipelines in their data science, application development, and big-data infrastructure tooling. Many of these enhancements will be aimed at building out the automated machine learning (ML) capabilities in their DevOps tooling. By year-end 2018, most data science toolkits will include tools for automated feature engineering, hyperparameter tuning, model deployment, and other pipeline tasks. At the same time, vendors will continue to enhance their unsupervised learning algorithms to speed up cluster analysis and feature extraction on unlabeled data. And they will expand their support for semi-supervised learning, which uses small amounts of labeled data to accelerate pattern identification in large, unlabeled data sets.
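To make the hyperparameter-tuning piece concrete, here is a minimal sketch using scikit-learn’s RandomizedSearchCV. The synthetic dataset, model choice, and search space are illustrative assumptions, not any particular vendor’s automated-ML implementation.

```python
# Minimal sketch of automated hyperparameter tuning with scikit-learn.
# Dataset, model, and search space are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

search = RandomizedSearchCV(
    estimator=RandomForestClassifier(random_state=0),
    param_distributions={
        "n_estimators": [50, 100, 200],
        "max_depth": [None, 5, 10, 20],
        "min_samples_leaf": [1, 2, 4],
    },
    n_iter=10,      # sample 10 hyperparameter combinations
    cv=3,           # score each candidate with 3-fold cross-validation
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```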

In 2018, synthetic (aka artificial) training data will become the lifeblood of most AI projects. Solution providers will roll out sophisticated tools for creating synthetic training data, along with the labels and annotations needed to use it for supervised learning.
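As a simple illustration of why synthetic data is attractive, the sketch below generates pre-labeled training records from a simulation the developer controls; the sensor-failure scenario, thresholds, and column names are hypothetical.

```python
# Minimal sketch of generating synthetic, pre-labeled training data.
# The "sensor failure" scenario and thresholds are illustrative assumptions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=42)
n = 5000

temperature = rng.normal(loc=70.0, scale=10.0, size=n)  # simulated sensor readings
vibration = rng.gamma(shape=2.0, scale=1.5, size=n)

# Labels come "for free" because we control the generating process:
# flag a synthetic failure when both readings are abnormally high.
failure = ((temperature > 85.0) & (vibration > 5.0)).astype(int)

synthetic = pd.DataFrame(
    {"temperature": temperature, "vibration": vibration, "failure": failure}
)
synthetic.to_csv("synthetic_training_data.csv", index=False)
```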

The surge in robotics projects and autonomous edge analytics will spur solution providers to add strong reinforcement learning support to their AI training suites in 2018. This will involve building AI modules that can learn autonomously with little or no “ground truth” training data, though possibly with human guidance. By the end of the year, more than 25 percent of enterprise AI app-dev projects will involve autonomous edge deployment, and more than 50 percent of those projects will involve reinforcement learning.
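For readers who want to see what learning without labels looks like in practice, here is a minimal tabular Q-learning sketch, a basic reinforcement-learning loop driven by a reward signal rather than labeled examples. The toy corridor environment and parameter values are illustrative assumptions.

```python
# Minimal sketch of tabular Q-learning: the agent learns from rewards,
# not labeled "ground truth" data. The toy corridor environment is assumed.
import numpy as np

n_states, n_actions = 6, 2            # corridor of 6 cells; actions: 0=left, 1=right
goal = n_states - 1
alpha, gamma, epsilon = 0.1, 0.9, 0.2
q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(0)

for episode in range(500):
    state = 0
    for step in range(100):           # cap episode length
        if state == goal:
            break
        # epsilon-greedy action selection
        if rng.random() < epsilon:
            action = int(rng.integers(n_actions))
        else:
            action = int(np.argmax(q[state]))
        next_state = min(state + 1, goal) if action == 1 else max(state - 1, 0)
        reward = 1.0 if next_state == goal else 0.0
        # Q-learning update: improve the value estimate from the reward signal
        q[state, action] += alpha * (
            reward + gamma * np.max(q[next_state]) - q[state, action]
        )
        state = next_state

print(np.argmax(q, axis=1))  # learned policy: move right toward the goal cell
```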


During the year, more AI solution providers will add collaborative learning to their neural-net training tools. This involves distributed AI modules collectively exploring, exchanging, and exploiting optimal hyperparameters so that all modules may converge dynamically on the optimal trade-off of learning speed vs. accuracy. Collaborative learning approaches, such as population-based training, will be a key technique for optimizing AI that’s embedded in IoT&P (Internet of Things and People) edge devices.

It will also be useful for optimizing distributed AI architectures such as generative adversarial networks (GANs) in the IoT, clouds, or even within server clusters in enterprise data centers. Many such training scenarios will leverage evolutionary algorithms, in which AI model fitness is assessed emergently through the collective decisions of distributed, self-interested entities operating from local knowledge with limited sharing beyond their neighbor entities.
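As a rough, single-process illustration of the population-based training idea mentioned above, the sketch below has weak “workers” periodically copy and perturb the hyperparameters of stronger ones; the toy scoring function and schedule are illustrative assumptions, not a distributed production implementation.

```python
# Minimal, single-process sketch of population-based training (PBT).
# The toy objective (peaking near learning rate 0.1) is an illustrative assumption.
import random

random.seed(0)

def score(learning_rate, steps):
    # Toy stand-in for validation accuracy.
    return steps * (1.0 - abs(learning_rate - 0.1))

population = [{"lr": random.uniform(0.001, 1.0), "steps": 0} for _ in range(8)]

for generation in range(20):
    for worker in population:
        worker["steps"] += 1                     # "train" each worker a little
        worker["score"] = score(worker["lr"], worker["steps"])

    population.sort(key=lambda w: w["score"], reverse=True)
    top, bottom = population[:2], population[-2:]
    for weak in bottom:
        strong = random.choice(top)
        weak["lr"] = strong["lr"]                # exploit: copy a winner's hyperparameter
        weak["lr"] *= random.choice([0.8, 1.2])  # explore: perturb the copy
        weak["steps"] = strong["steps"]          # inherit training progress

# Hyperparameters drift toward the better-scoring region over generations.
print(sorted(w["lr"] for w in population))
```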

Another advanced AI-training feature we’ll see in AI suites in 2018 is transfer learning. This involves reusing some or all of the training data, feature representations, neural-node layering, weights, training method, loss function, learning rate, and other properties of a prior model. Typically, a developer relies on transfer learning to tap into statistical knowledge that was gained on prior projects through supervised, semi-supervised, unsupervised, or reinforcement learning. Wikibon has seen industry progress in using transfer learning to apply the hard-won knowledge gained in training one GAN to GANs in adjacent solution domains.
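A minimal sketch of that reuse pattern in Keras is shown below: the convolutional base and its pretrained ImageNet weights are carried over and frozen, and only a new task-specific head is trained. The model choice, input shape, and the commented-out fit call are illustrative assumptions.

```python
# Minimal sketch of transfer learning with Keras: reuse a pretrained feature
# extractor, freeze it, and train only a new classification head.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3),
    include_top=False,      # drop the original ImageNet classifier head
    weights="imagenet",     # reuse weights learned on a prior task
)
base.trainable = False      # freeze the transferred layers

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # new task-specific head
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(new_domain_images, new_domain_labels, epochs=5)  # trains only the head
```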

Also during the year, edge analytics will continue to spread throughout enterprise AI architectures, and edge-node, on-device AI training will become a standard feature of mobile and IoT&P development tools. Already, we see it in many leading IoT and cloud providers’ AI tooling and middleware.

About the author: James Kobielus is SiliconANGLE Wikibon’s lead analyst for Data Science, Deep Learning, and Application Development.

