February 14, 2019

Frameworks Seek to Control AI


AI governance frameworks are emerging as guard rails for controlling algorithms that are playing a growing role in human decision-making. Among the goals is managing the consequences of those decisions.

Business consultants and professional services firms in particular have focused on new ways to assess and control AI algorithms as a way of building trust. Among them is KPMG, which this week launched a new framework, called AI in Control, designed to assess the algorithms underlying business applications, spot bias and enforce governance rules for ethical AI.
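
To make the idea concrete, the sketch below shows the kind of elementary bias check such an assessment might run over a model’s outputs: it compares positive-outcome rates across demographic groups and flags the model when the gap exceeds a tolerance. The function, sample data and 0.10 threshold are illustrative assumptions, not KPMG’s actual implementation.

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return the largest difference in positive-outcome rates
    between any two demographic groups."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Toy data: group A is approved 75% of the time, group B only 25%.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_gap(preds, groups)
if gap > 0.10:  # illustrative tolerance, not a KPMG figure
    print(f"Bias alert: parity gap of {gap:.2f} exceeds tolerance")
```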

The goal of KPMG’s framework is to foster AI algorithms that are accurate, addressing what the company warns is a current “trust gap” among business executives clamoring for “explainable AI.”

The new AI control approach is based on deep misgivings among business executives about the trustworthiness of their data and analytics. In a survey commissioned by KPMG, 92 percent of executives said they worry about the impact of data and analytics on their company’s reputation. More than 80 percent said they lack confidence in the governance of the resulting AI algorithms that underpin a growing number of enterprise applications.

“There is a clear need to develop a governance and control model for [AI] algorithms,” the survey found. “But the governance and assessment of algorithms is still in its infancy.”

On the assumption that data-driven businesses have managed to put the cart before the horse, KPMG and other business consultants have unveiled frameworks to assess risks while gaining a measure of control over AI algorithms. For example, federal contractor Booz Allen Hamilton released a machine intelligence “risk triage framework” last year that ranks the unintended impact of machine intelligence initiatives from mere annoyance to financial and psychological harm, culminating in high-risk initiatives that threaten physical harm.
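
A minimal sketch of how such a triage ranking might look in code, using the tiers the article describes (annoyance, financial and psychological harm, physical harm). The enum, review levels and initiative names are hypothetical, not Booz Allen Hamilton’s actual framework.

```python
from enum import IntEnum

class ImpactTier(IntEnum):
    """Worst-case unintended impact, ordered from least to most severe."""
    ANNOYANCE = 1           # a mere inconvenience to users
    FINANCIAL_HARM = 2      # monetary loss
    PSYCHOLOGICAL_HARM = 3  # distress or discrimination
    PHYSICAL_HARM = 4       # highest risk: threat to safety or life

def triage(initiative: str, tier: ImpactTier) -> str:
    """Map an initiative's worst-case impact tier to a review level."""
    if tier == ImpactTier.PHYSICAL_HARM:
        return f"{initiative}: high risk, mandatory human oversight"
    if tier >= ImpactTier.FINANCIAL_HARM:
        return f"{initiative}: elevated risk, periodic audit"
    return f"{initiative}: low risk, standard monitoring"

print(triage("chatbot-recommender", ImpactTier.ANNOYANCE))
print(triage("loan-scoring-model", ImpactTier.FINANCIAL_HARM))
print(triage("autonomous-forklift", ImpactTier.PHYSICAL_HARM))
```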

The first part of KPMG’s two-part framework offers a governance template for building, monitoring and, ultimately, controlling AI algorithms, all of which the company said can be done without slowing technology innovation. The second part is a tool that conducts risk assessments to help determine whether an organization has effective control of its AI algorithms, with testing controls designed to assess the design and implementation of explainable and fair AI programs.
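
The snippet below sketches one simple explainability idea such a testing control could build on: for a linear scoring model, each feature’s contribution to a decision is just its weight times its value, which can be surfaced as a plain-language reason for the outcome. The model, weights, feature names and threshold are hypothetical assumptions, not part of KPMG’s tooling.

```python
# Hypothetical linear credit-scoring model; weights and feature names
# are illustrative only.
WEIGHTS = {"income": 0.5, "debt_ratio": -0.8, "years_employed": 0.3}
THRESHOLD = 0.4  # illustrative approval cutoff

def explain_decision(applicant: dict) -> None:
    """Print the decision plus each feature's contribution to it."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    verdict = "approved" if score >= THRESHOLD else "denied"
    print(f"Decision: {verdict} (score={score:.2f})")
    # List the features that drove the decision, most influential first.
    for feature, contrib in sorted(contributions.items(),
                                   key=lambda kv: abs(kv[1]), reverse=True):
        print(f"  {feature}: {contrib:+.2f}")

explain_decision({"income": 0.9, "debt_ratio": 0.6, "years_employed": 0.5})
```

Because every contribution is additive in a linear model, the explanation is exact rather than approximate; more complex models would need surrogate techniques to produce comparable reasons.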

KPMG said its platform is a product of collaboration with Cathy O’Neil, the mathematician and author of Weapons of Math Destruction, which takes a critical look at the societal impact of algorithms. “Across the AI landscape, there is an urgent need to manage bias, fairness and accountability,” said O’Neil, who also heads a risk assessment and algorithmic auditing firm.

“Collaborations like this are an important way to begin to address these issues,” O’Neil added.

Recent items:

Making ML Explainable Again

Opening Up Black Boxes with Explainable AI

AI, You’ve Got Some Explaining to Do
