October 17, 2018

More Tools Emerge for AI Deployment


As data scientists confront operational challenges that are slowing the transition of machine learning models to production, more vendors are stepping up with possible solutions for breaking up the logjam.

The latest is data science platform vendor Domino Data Lab, which rolled out the latest release of its flagship LaunchPad module this week. The 3.0 version specifically addresses “last mile” data science hurdles to streamline the model deployment process while speeding up ongoing improvements to production models.

Citing the slow rate of AI model utilization, Domino Data Lab and others are offering tools designed to bridge the gap between data science and DevOps teams. That gap is increasingly seen as the biggest operational challenge as data science teams struggle to push models to production.

The San Francisco-based startup, which announced a $40 million funding round in August, said Wednesday (Oct. 17) its latest module aims to help reduce the hassles associated with deploying models. Version 3.0 also seeks to accelerate the iterative design process for models in production to boost their business utility.

To that end, the new module provisions infrastructure via Docker application containers while expanding support for programming tools used for large data sets. Meanwhile, an automatic model versioning capability is designed to accelerate model iteration by reproducing the experimental history of individual models.

The result is faster model improvements that can move predictive and machine learning models more quickly to production, said Nick Elprin, Domino’s co-founder and CEO.

Other approaches to the “last mile” problem include new tools and migration options to push machine learning and HPC workloads to production for use in enterprise data applications. Those tools include cloud-based CPU, GPU and FPGA accelerators.

Nine out of ten respondents to a recent vendor survey released by workload management specialist Univa Corp. said they expect to use cloud GPUs to accelerate the shift of machine learning workloads to production. Meanwhile, 80 percent said they will leverage hybrid cloud infrastructure for their machine learning projects as a way to reduce costs.

Domino Data Lab cited other surveys finding that nearly 85 percent of executives polled said they are pursuing AI applications, but only 5 percent said they actually use AI models extensively in their operations.

Vendors are also offering more tools to gauge model performance. For instance, Microsoft (NASDAQ: MSFT) announced earlier this month that the latest version of its machine learning framework allows .NET developers to use models based on the Open Neural Network Exchange (ONNX) format to make predictions with trained models. The new version supports ONNX models trained in frameworks ranging from ML.NET to TensorFlow and then exported to ONNX. The resulting models can also be used for machine learning applications such as object recognition, Microsoft said.

Recent items:

Migration Tools Needed to Shift ML to Production

Microsoft Extends ML Framework