Growing Focus on MLOps as AI Projects Stall
As a growing percentage of enterprise AI projects stall, data science platform vendors are teaming with cloud and data management specialists to move AI projects from the model-building stage to production workloads.
That’s among the goals of a partnership announced this week by Iguazio, the platform vendor specializing in machine learning pipeline automation, and NetApp Inc. (NASDAQ: NTAP), the hybrid cloud data service and management firm. The AI automation collaboration aims to simplify MLOps via the integration of Iguazio’s platform with NetApp’s ONTAP AI framework running on Nvidia DGX servers and NetApp’s all-flash cloud storage.
Along with data management, data versioning and support for NetApp’s cloud volumes storage, the partners said the combination is also compatible with Kubeflow 1.0, the automation tool released in March to help developers train and scale machine learning workloads atop Kubernetes clusters.
The partners said the MLOps platform would allow customers to leverage GPU services as well as Nvidia’s GPU cloud containers that include a runtime, libraries and operating system. The combination aims to automate pipelines across machine learning, deep learning and data analytics.
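The pipeline automation described above, chaining stages such as data ingestion and model training so each stage feeds the next without manual handoffs, can be illustrated with a minimal sketch. This is hypothetical code for illustration only, not the Iguazio or NetApp API:

```python
# Illustrative sketch of the pipeline pattern MLOps platforms automate:
# registered steps run in order, each receiving the previous step's output.
from typing import Any, Callable, List, Tuple

class Pipeline:
    """A minimal sequential ML pipeline (hypothetical, for illustration)."""
    def __init__(self) -> None:
        self.steps: List[Tuple[str, Callable[[Any], Any]]] = []

    def step(self, name: str):
        """Decorator that registers a function as the next pipeline stage."""
        def register(fn: Callable[[Any], Any]) -> Callable[[Any], Any]:
            self.steps.append((name, fn))
            return fn
        return register

    def run(self, data: Any) -> Any:
        """Run all stages in order, threading each output into the next stage."""
        for name, fn in self.steps:
            data = fn(data)
            print(f"step '{name}' complete")
        return data

pipeline = Pipeline()

@pipeline.step("ingest")
def ingest(raw):
    # Parse raw records into numeric features.
    return [float(x) for x in raw]

@pipeline.step("train")
def train(samples):
    # Stand-in for model training: fit a trivial "mean" model.
    return {"mean": sum(samples) / len(samples)}

model = pipeline.run(["1.0", "2.0", "3.0"])
print(model)  # {'mean': 2.0}
```

A production platform layers scheduling, GPU allocation, data versioning and monitoring on top of this basic chaining, which is the operational burden the partnership aims to automate.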
“The end result is a scalable way of processing data and computation,” Iguazio said.
The Iguazio-NetApp partnership represents a larger industry push to ramp up machine learning workloads that would, for example, leverage real-time data for predictive analytics applications. Along with automating machine learning pipelines, other players have called for open standards for machine learning operations. Late last year, Cloudera asserted the lack of MLOps standards was among the reasons why companies struggled to move models to production.
Separately, Core Scientific released an AI platform service on Monday (May 4) built on NetApp’s ONTAP infrastructure. The AI and blockchain specialist also said its data science cloud would be available through co-location specialist Equinix.