January 30, 2020

Defenses Emerge to Combat Adversarial AI

George Leopold


As threats like deep fakes and data poisoning lurk, momentum is building for deploying trusted, pre-trained AI models with embedded security and defenses against the emerging threat known as adversarial AI.

Adversarial AI attacks that can corrupt data used to train AI models are on the rise as companies seek to scale AI applications. In response, the consulting firm Booz Allen Hamilton launched an enterprise AI software product late last year designed to accelerate deployment and management of trusted AI models at scale.

The Modzy platform and marketplace is a clearinghouse for secure models that incorporate proprietary defenses against adversarial AI.

Those pre-trained models are designed to provide “a predictable and repeatable way to rapidly deploy, manage and secure AI models at enterprise scale,” said Josh Sullivan, a senior vice president at Booz Allen Hamilton and Modzy executive leader.

The new product feature responds to the growing number of targeted data poisoning attacks on neural networks. Security specialists note that deep learning algorithms must be hardened against such exploits before they can be deployed in mission-critical applications.

Data poisoning attacks, for example, are designed to manipulate application performance by inserting “poison instances” into training data. According to researchers at the University of Maryland, “a system can be poisoned with just one single poison image, and this image won’t look suspicious, even to a trained observer.”
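To make the single-instance threat concrete, here is a minimal illustrative sketch (not any vendor's actual system) showing how one mislabeled "poison instance" inserted into training data can flip the output of a simple nearest-neighbor classifier. All features and labels are invented for demonstration.

```python
# Illustrative sketch only: one attacker-inserted training point
# flips the prediction of a 1-nearest-neighbor classifier.

def nearest_neighbor_predict(train, query):
    """Return the label of the training point whose feature is closest to query."""
    return min(train, key=lambda pair: abs(pair[0] - query))[1]

# Clean training data as (feature, label) pairs -- made-up values
clean = [(0.0, "benign"), (0.5, "benign"),
         (9.5, "malicious"), (10.0, "malicious")]

query = 2.0
print(nearest_neighbor_predict(clean, query))     # -> benign

# A single poison instance placed near the query region, mislabeled "malicious"
poisoned = clean + [(2.1, "malicious")]
print(nearest_neighbor_predict(poisoned, query))  # -> malicious
```

The toy model makes the point starkly: the poison point is just one entry among five and looks unremarkable on its own, yet it completely changes how the region around it is classified.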

In response, Booz Allen promotes its Modzy platform as meeting the growing demand for pre-trained AI models that can be used to scale automated applications. Those plug-and-play models would allow data scientists and DevOps teams to accelerate deployment and integrate trusted AI models into enterprise applications, the company said.

The models include proprietary adversarial defenses that filter poison data and scan trained models for vulnerabilities. The AI security technology was developed by Booz Allen partners that include Hypergiant Industries.
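Booz Allen's defenses are proprietary, but one common class of poison-filtering heuristic can be sketched generically: flag training points that sit unusually far from the centroid of their labeled class. The example below is a simplified, hypothetical illustration of that idea, not the Modzy implementation; all data and the distance threshold are assumptions.

```python
# Generic sketch of a centroid-distance poison filter (hypothetical,
# not the proprietary Modzy defense). Points far from their class
# mean are treated as suspicious and dropped.
from collections import defaultdict

def centroid_filter(train, threshold):
    """Keep (feature, label) pairs within `threshold` of their class mean."""
    by_label = defaultdict(list)
    for x, y in train:
        by_label[y].append(x)
    centroids = {y: sum(xs) / len(xs) for y, xs in by_label.items()}
    return [(x, y) for x, y in train if abs(x - centroids[y]) <= threshold]

# Made-up poisoned training set: (2.1, "malicious") is the poison instance,
# sitting far from the other "malicious" points clustered near 10.
poisoned = [(0.0, "benign"), (0.5, "benign"),
            (9.5, "malicious"), (10.0, "malicious"),
            (2.1, "malicious")]

filtered = centroid_filter(poisoned, threshold=3.0)
print(filtered)  # the outlying poison point is removed
```

Real defenses are far more sophisticated (operating on learned feature representations rather than raw inputs, for instance), but the principle of screening training data for statistical outliers before a model ever sees it is the same.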

The AI startup, based in Austin, Texas, works with government agencies and large companies on computer vision, natural language and other automation projects.

Booz Allen said Modzy is continually updated with new models via partnerships with Nvidia and AI vendors like Hypergiant and Orbital Insight, a geospatial analytics vendor based in Palo Alto, Calif. The platform also offers open source models as well as others generated by university developers. An open API also supports custom enterprise models.

Sullivan said Modzy was in development for two years before its release in November 2019.

The goal is “to help organizations discover, build and deploy models in the real world where it’s messy and it’s complicated and it’s often highly regulated,” Sullivan added.
