October 22, 2021

ML Needs Separate Dev and Ops Teams, Datatron Says

(Clare-Louise-Jackson/Shutterstock)

In the machine learning world, the folks developing models are often the same folks tasked with running them in production. And they often use the same end-to-end ML software stacks. But emerging best practices around operations and governance demand a strict separation of those activities, including personnel and stacks, the CEO of MLOps startup Datatron says.

“A lot of vendors out there developed this operations part as part of the development lifecycle,” says Harish Doddi, CEO of Datatron, a San Francisco software company founded in 2016. “But ultimately the way we see things is, when models go from development to production, [there needs to be] a separation of duties, or boundaries.”

There are several reasons why the development team and the operations team should be composed of different people using different tools and technology, says Doddi, who cut his teeth on managing ML systems at Lyft, Snap, and Twitter before founding Datatron with Jerry Xu.

For starters, data scientists are constantly trying out different tools and technologies, which is just the nature of the fast-moving discipline.

“My experience is there is often friction between data science organization and the ops side of the organization, and the reason for that is data scientists always change. They always adopt new tools, adopt new things–whatever makes them move fast,” Doddi says. “But ops people, the mindset is very different. The mindset is all about stability, reliability. They’re not going to change things that work.”

Operations has different priorities than development (Gorodenkoff/Shutterstock)

Building tight links between the operations side of the house–including monitoring and managing machine learning models in production–and the development side of the house is a mistake, Doddi says, because it unnecessarily limits what tools data scientists can use.

“Our conviction is the model development has to be completely independent from the production part,” he says. “People should have the flexibility to use whatever tools that they want to use in the model development, whether it is open source tools like Python, PySpark, whether it is commercial tools like H2O, or whether it is legacy tools like SAS or R.”

Datatron’s software, which it calls the Datatron Reliable AI platform, focuses on the deployment, monitoring, management, and governance of ML models in production. Once the data science team hands the models over to the MLOps folks–whether in the form of an H2O MOJO or POJO file, a pickle file for a Scikit-learn object, or a .sas file from a SAS environment–that’s where the platform kicks in.
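For a Scikit-learn model, that hand-off can be as simple as a pickle file. The sketch below is illustrative only–the file name and workflow are assumptions, not Datatron's process–but it shows the shape of the artifact that crosses the boundary between the two teams.

```python
# A minimal sketch of the hand-off artifact described above: the data science
# side serializes a trained Scikit-learn model to a pickle file, and the ops
# side loads it without needing the training code or environment.
import pickle

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# --- Development side: train and serialize the model ---
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = LogisticRegression().fit(X, y)
with open("lending_model.pkl", "wb") as f:  # hypothetical artifact name
    pickle.dump(model, f)

# --- Operations side: load the artifact and serve predictions ---
with open("lending_model.pkl", "rb") as f:
    production_model = pickle.load(f)
print(production_model.predict(X[:5]))
```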

“We take a model that has been developed by a data scientist in their environment and really add the enterprise-grade production,” Doddi says. “And that means things like containerization of the models, auto scaling of the models, infrastructure management, resource management–all of these things are automated through the platform.”
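As a rough illustration of what that first step involves at the lowest level, the sketch below wraps a pickled model in a small HTTP service, the kind of component a platform would then containerize and scale automatically. Flask and the /predict route are illustrative choices here, not Datatron's API.

```python
# A minimal sketch of wrapping a pickled model in an HTTP service so it can
# be containerized and auto-scaled. Framework and route are assumptions.
import pickle

from flask import Flask, jsonify, request

app = Flask(__name__)

with open("lending_model.pkl", "rb") as f:  # artifact from the hand-off sketch above
    model = pickle.load(f)

@app.route("/predict", methods=["POST"])
def predict():
    features = request.get_json()["features"]  # expects a list of feature rows
    return jsonify(predictions=model.predict(features).tolist())

if __name__ == "__main__":
    app.run(port=8080)
```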

Compared to other types of enterprise software, ML models are a different beast entirely. When something goes awry in an ML model, the debugging process can take quite a bit of time. Datatron has built automation into its software to help MLops teams handle a failed model, including implementing backup models or switching to a previous version.
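A simplified version of that failover logic might look like the following sketch, where a request falls through to a backup model, such as a previous version, if the primary model errors out. The class is illustrative, not Datatron's implementation.

```python
# A minimal sketch of model failover: try the primary model, and fall back
# to backups (e.g. a previous version) if it raises. Names are assumptions.
class FailoverModel:
    def __init__(self, primary, backups):
        self.primary = primary
        self.backups = backups  # e.g. [previous_version, simple_baseline]

    def predict(self, X):
        for model in [self.primary] + self.backups:
            try:
                return model.predict(X)
            except Exception:
                continue  # try the next backup; a real system would log this
        raise RuntimeError("all models failed")
```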

Datatron’s software gives MLOps folks additional control over their models. For example, it supports automated A/B testing and champion-challenger capabilities to compare multiple models in a real-world setting. It also offers a “shadow testing” mode that lets users see how a new or updated model will behave once it’s put into production, and it lets MLOps folks designate backup models to be used in case the primary model fails.
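A bare-bones sketch of champion-challenger routing with shadow testing might look like the following: the champion's prediction is served to callers, while the challenger scores the same traffic silently so the two can be compared before any switch. The names and logging are assumptions, not Datatron's API.

```python
# A minimal sketch of champion-challenger with shadow testing: the champion
# answers live traffic; the challenger's predictions are logged, never served.
import logging

logging.basicConfig(level=logging.INFO)

def serve(champion, challenger, X):
    live_prediction = champion.predict(X)      # returned to the caller
    shadow_prediction = challenger.predict(X)  # recorded for comparison only
    # Assumes predictions are NumPy arrays, as with Scikit-learn models.
    agreement = (live_prediction == shadow_prediction).mean()
    logging.info("champion/challenger agreement: %.2f", agreement)
    return live_prediction
```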

It’s all about enabling a customer to create their own “factory of models” for machine learning, Doddi says. “This factory needs to be maintained reliably in a production environment, because if things go wrong…then there is a possibility that your models can make decisions that could be disastrous to the business,” he says.

Unlike regular software debugging, debugging ML models is very hard, Doddi says. Data scientists may ultimately be called in to investigate why their model is misbehaving, but that process can take months to complete. In the meantime, the show must go on.

“Production is always a moving target. A lot of times you observe things first time in production,” Doddi tells Datanami. “For example, you have a model meant for a particular state like California. For some reason the model is getting data points from Arizona. Now the model still works, but is it working correctly? No, it’s not working right. A lot of things can go wrong with production. That’s why our philosophy is you need a scalable and reliable environment so if things go wrong [there are] failsafe techniques.”
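Doddi's California-versus-Arizona example boils down to checking incoming data against the population the model was built for. A minimal sketch, with assumed field names:

```python
# A minimal sketch of the out-of-scope check implied above: flag requests
# whose "state" field falls outside the population the model was trained on.
# Field names and the response to a mismatch are assumptions.
EXPECTED_STATES = {"CA"}  # the model was built for California data

def check_request(record):
    if record.get("state") not in EXPECTED_STATES:
        # In production this might raise an alert or route to a fallback;
        # here we simply report the mismatch.
        print(f"drift warning: unexpected state {record.get('state')!r}")

check_request({"state": "AZ", "loan_amount": 25000})  # triggers the warning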

Datatron co-founder and CEO Harish Doddi

Another risk that ML-using companies must deal with revolves around emerging regulations on data and AI. Regulations are not rolling out evenly across the world, and companies are demanding greater control over their ML models to ensure they don’t get tripped up by fast-moving legislation. That’s why Datatron is also bringing model governance to bear alongside its monitoring and management of models.

“For Facebook or the consumer tech companies, there’s no concept of privacy. You search for something, you’re going to get the ad come up,” he says. “But enterprise data is very different. For example, you have a lending model, let’s say, that’s showing some sort of discrimination against a particular ethnicity. That is subject to lawsuits. So that’s the legal component of the equation.”

In addition to monitoring for accuracy and performance, Datatron’s software also watches the models for signs that they may be drifting into an area where they are perpetuating bias against customers.
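One common form such a check can take is demographic parity: compare a model's approval rate across groups and flag a large gap. The sketch below is a generic illustration; the article does not describe Datatron's specific bias metric, and the threshold and field names are assumptions.

```python
# A minimal sketch of a demographic-parity check: compute per-group approval
# rates from (group, approved) decisions and flag a large gap between them.
from collections import defaultdict

def approval_rates(decisions):
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in decisions:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: a / t for g, (a, t) in counts.items()}

rates = approval_rates([("A", True), ("A", True), ("B", False), ("B", True)])
if max(rates.values()) - min(rates.values()) > 0.2:  # assumed tolerance
    print(f"bias warning: approval rates diverge: {rates}")
```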

The software communicates some of this information to the MLops personnel through a “model trust score” that features a scale of 0 to 100. “It actually represents how much trust is there of this particular model, how reliable is this model,” Doddi says.
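The article does not describe how the trust score is computed, but conceptually a 0-to-100 score rolls several monitoring signals into one number. A purely hypothetical sketch, with invented inputs and weights:

```python
# A hypothetical sketch of aggregating monitoring signals into a 0-100 trust
# score. The inputs, weights, and formula are assumptions for illustration;
# this is not Datatron's actual scoring method.
def trust_score(accuracy, drift, bias_gap):
    # accuracy in [0, 1]; drift and bias_gap in [0, 1], where 0 is best
    score = 100 * (0.5 * accuracy + 0.25 * (1 - drift) + 0.25 * (1 - bias_gap))
    return round(score)

print(trust_score(accuracy=0.91, drift=0.10, bias_gap=0.05))  # -> 92
```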

Earlier this week, Datatron unveiled a new release of its Reliable AI platform. The new release includes a handful of new features, including support for ML Gateways that simplify the deployment of ML models in complex, multi-tenant environments; support for customer-defined KPIs for ML monitoring; new explainability capabilities designed to enhance trust in ML models; native support for Jupyter notebooks; and a new rapid setup and deployment process that supports deployment of APIs for real-time or batch inferencing in less than 10 minutes.

Datatron is a young company and doesn’t appear to have a lot of customers at this point. But judging by its logo sheet, which features Johnson & Johnson, Comcast, Ford, and Domino’s Pizza, some of its customers are sizable enterprises.

One happy Datatron customer is Zack Fragoso, who manages data science and AI activities at Domino’s. “At Domino’s, we understood very early on that for our AI initiatives to be successful, it was important to bridge the skill sets gap between the different data scientist teams and IT organizations,” Fragoso says in a press release. “Not only does Datatron’s platform make this possible, but it also enables us to implement strong MLOps to rapidly operationalize our machine learning models.”

More news is expected soon from Datatron.

Related Items:

A ‘Breakout Year’ for ModelOps, Forrester Says

It’s Time for MLOps Standards, Cloudera Says

Growing Focus on MLOps as AI Projects Stall
