September 14, 2022

MLOps Startup Diveplane Raises $25M Series A


Diveplane, an MLOps startup based in Raleigh, N.C., has announced it raised $25 million in Series A funding.

The company produces a suite of enterprise AI products that it says are designed around the principles of predict, explain, and show, creating user confidence that operational decisions are built on a foundation of fairness and transparency.

At a time when artificial intelligence spending is set to reach $62 billion this year alone, the ethical considerations of AI continue to face scrutiny. Given their complexity, AI and ML models arrive at their predictions in sometimes opaque ways, often trained on highly private data, creating the famed “black box” style of predictive computing that proponents of explainable AI have been working to illuminate.

This is also a time when companies are expecting maximum value from their AI initiatives, and it all comes down to accurate, explainable models. Diveplane supports the deployment and maintenance of machine learning models in production with tools the company says are trainable, interpretable, and auditable, and support use cases including prediction, anomaly detection, anonymization, and synthetic data creation.

Diveplane Reactor is a cloud-based ML platform that creates AI models from historical data to automate repetitive business tasks. The company says the output of the Reactor platform is “comprehensive, defensible, and clear about how it arrived at a certain decision and exactly what data informed that choice.” Human review features help identify potentially biased data, and the explanations Reactor provides are derived from a series of proprietary measurements, features which Diveplane says set it entirely apart from black box AI systems.

Diveplane’s Geminai product assists with data privacy by creating anonymized datasets for training AI systems. According to the company website, it does this by creating a verifiable synthetic ‘twin’ dataset with the same statistical properties as the original data, but without the real-world confidential or personal information. Another product, Sonar, is a service that conducts a deep dive into data and AI models to identify outliers. Such anomalies can lead to model drift, a gradual shift in a model’s accuracy over time, and Diveplane claims that Sonar’s forensic analysis detects drift and deviations quickly so that corrective action can be taken.

Diveplane’s founders, from left to right: Mike Capps, Chris Hazard, and Mike Resnick. Source: Diveplane

Diveplane was founded in 2017 by AI and gaming specialists Chris Hazard and Mike Resnick, along with former Epic Games president Mike Capps. The Diveplane platform is built on technology from Hazardous Software, a company founded by Hazard in 2007. Originally a gaming company, Hazardous Software went on to build AI-based strategy and decision support software for the U.S. Army before spinning off into Diveplane.

Diveplane’s $25 million Series A round was led by Shield Capital with participation from Calibrate Ventures, L3Harris Technologies, and Sigma Defense. The company plans to further invest in its AI solutions with an eye toward meeting market demand.

“Chris, Mike, and the Diveplane team are building a leading technology platform to employ the power of AI while protecting privacy and explainability,” said Raj Shah, managing partner of Shield Capital. “We are excited to partner with them as their platform is foundational for large organizations to safely implement and scale AI.”

“We founded Diveplane with the mission of putting humanity back into AI, and we’re succeeding,” said Mike Capps, co-founder and CEO of Diveplane. “We’re building trusted partnerships, with a product set that provides a holistic capability for fair and transparent decision making and data privacy. This support adds rocket fuel to our business, so we can build on our successful approach to helping companies innovate with our Reactor platform.”

Related Items:

Organizations Struggle with AI Bias

Don’t Forget the Human Factor in Autonomous Systems and AI Development

Opening Up Black Boxes with Explainable AI