November 5, 2020

Algorithmia, Datadog Team on MLOps

Tools continue to be introduced that let machine learning developers monitor model and application performance and catch problems such as model and data drift, part of a trend one market tracker dubs “ModelOps.”

The latest comes from Algorithmia, which this week launched an enterprise platform for monitoring machine learning model performance. The Seattle-based MLOps and management software vendor also said it has partnered with Datadog on a pipeline designed to stream metrics through Apache Kafka to Datadog via its Metrics API.
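Neither company detailed the pipeline’s internals in the announcement, so the following Python sketch is purely illustrative: it assumes inference metrics arrive as JSON records on a Kafka topic (the topic name, record fields and metric names here are hypothetical) and forwards each record to Datadog’s public Metrics API endpoint.

```python
# Illustrative sketch only: the announcement does not publish the pipeline's
# internals. Assumes a Kafka topic of JSON metric records; the topic name,
# record fields and metric names below are hypothetical.
import json
import os
import time

import requests
from kafka import KafkaConsumer

DD_API_URL = "https://api.datadoghq.com/api/v1/series"  # Datadog Metrics API
DD_API_KEY = os.environ["DD_API_KEY"]

consumer = KafkaConsumer(
    "algorithmia.insights",               # hypothetical topic name
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for record in consumer:
    insight = record.value
    # Re-shape each record into Datadog's time-series payload.
    series = [
        {
            "metric": f"ml.model.{name}",           # e.g. ml.model.confidence
            "points": [[int(time.time()), value]],
            "type": "gauge",
            "tags": [f"model:{insight.get('model', 'unknown')}"],
        }
        for name, value in insight.get("metrics", {}).items()
    ]
    requests.post(
        DD_API_URL,
        headers={"DD-API-KEY": DD_API_KEY, "Content-Type": "application/json"},
        json={"series": series},
        timeout=10,
    )
```

Routing through Kafka rather than calling Datadog directly means the same metric stream can feed Datadog and any other downstream consumer without touching the model code.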

Algorithmia’s enterprise platform, dubbed Insights, provides access to algorithm inference and MLOps metrics. Along with improving model performance and compliance with data governance regulations, Algorithmia CEO Diego Oppenheimer said the monitoring platform helps reduce the risk of model failure.

Machine learning models are prone to flaws like data and concept drift, the latter referring to cases where the statistical properties of the quantity a model predicts change materially over time; a model trained on pre-pandemic purchasing patterns, for example, can quietly lose accuracy once customer behavior shifts.

The Insights tool is designed to help users “overcome these issues while making it easier to monitor model performance in the context of other operational metrics and variables,” Oppenheimer said in unveiling the platform on Thursday (Nov. 5).

The platform is further promoted as combining execution time, request identification and other operational metrics with confidence, accuracy and other user-defined inference metrics used to identify and correct model drift. “The goal is to deliver metrics where they are most actionable by the teams responsible for these production systems,” the company said.
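The announcement does not publish Insights’ exact payload schema, but a combined record of the kind described might look like the following sketch, where every field name is an assumption for illustration:

```python
# Hypothetical example of a combined metrics record of the kind described:
# operational metrics (execution time, request ID) alongside user-defined
# inference metrics (confidence, accuracy). All field names are assumptions.
combined_record = {
    "request_id": "req-7f3a9c",        # operational: identifies the call
    "duration_ms": 42.7,               # operational: execution time
    "model": "fraud-classifier",       # which deployed model produced this
    "version": "1.4.0",
    "metrics": {                       # user-defined inference metrics
        "confidence": 0.87,
        "accuracy_rolling": 0.93,
    },
}
```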

Meanwhile, Algorithmia’s partnership with New York-based Datadog reflects growing enterprise requirements to identify data and model drift along with model bias, data skews and negative feedback loops. The integration of the Insights platform with Datadog’s Metrics API would allow developers to correlate model performance metrics with metrics from the production infrastructure those models run on.

The partners also said the integration addresses the current patchwork of manual processes and disparate tools required to monitor the performance of machine learning models used in enterprise applications. Centralized data collection and comprehensive monitoring are promoted as ways to avoid model drift, failure and performance shortfalls in response to shifts such as unexpected customer behavior.

The monitoring tools also reflect a wider enterprise transition to MLOps, or what market tracker Forrester calls “ModelOps,” defined as “tools, technology, and practices that enable cross-functional AI teams to efficiently deploy, monitor, retrain and govern AI models in production systems.”

A recent Forrester survey of the ModelOps sector listed Algorithmia among the key purveyors, along with ModelOp, Modzy, Quickpath and superwise.ai.

Recent items:

Staying on Top of ML Model and Data Drift

Keeping Your Models on the Straight and Narrow

A ‘Breakout Year’ for ModelOps, Forrester Says
