December 9, 2019

Real Progress Being Made in Explaining AI


One of the biggest roadblocks that could prevent the widespread adoption of AI is explaining how it works. Deep neural networks, in particular, are extremely complex and resist clear description, which is a problem when it comes to ensuring that decisions made by AI are fair and free of human bias. But real progress is being made on the explainable AI (XAI) problem on several fronts.

Google made headlines several weeks ago with the launch of Google Cloud Explainable AI. Explainable AI is a collection of frameworks and tools that explain to the user how each data factor contributed to the output of a machine learning model.

“These summaries help enterprises understand why the model made the decisions it did,” wrote Tracy Frey, Google’s director of strategy for Cloud AI, in a November 21 blog post. “You can use this information to further improve your models or share useful insights with the model’s consumers.”

Google’s Explainable AI exposes some of the internal technology that Google created to give its developers more insight into how its large-scale search engine and question-answering systems provide the answers they do. These frameworks and tools leverage complicated mathematical equations, according to a Google white paper on its Explainable AI offering.

One of the key mathematical elements used is Shapley values, a concept created by Nobel Prize-winning mathematician Lloyd Shapley in the field of cooperative game theory in 1953. Shapley values are helpful in creating “counterfactuals,” or foils, in which the algorithm repeatedly assesses what result it would have given if the value of a certain data point had been different.
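To make the idea concrete, here is a minimal, purely illustrative Python sketch of brute-force Shapley attribution; the toy lending score, its features, and the zero baseline are invented for this example and are not taken from Google’s tooling. Each feature’s Shapley value is the weighted average of how much adding that feature changes the prediction, taken over every possible coalition of the other features.

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Brute-force Shapley values for a small feature set.

    predict  -- function mapping a feature vector (list) to a score
    x        -- the instance being explained
    baseline -- "absent" values used when a feature is left out
    """
    n = len(x)

    def coalition_value(subset):
        # Features in the coalition keep their real values; the rest are
        # replaced with baseline values, i.e. treated as "not looked at."
        mixed = [x[i] if i in subset else baseline[i] for i in range(n)]
        return predict(mixed)

    values = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for k in range(len(others) + 1):
            for subset in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_i = coalition_value(set(subset) | {i})
                without_i = coalition_value(set(subset))
                phi += weight * (with_i - without_i)
        values.append(phi)
    return values

# Toy linear "credit score": 2*income + 1*history - 3*debt (made up for illustration)
score = lambda v: 2 * v[0] + 1 * v[1] - 3 * v[2]
print(shapley_values(score, x=[1.0, 0.5, 0.2], baseline=[0.0, 0.0, 0.0]))
# -> approximately [2.0, 0.5, -0.6]: each feature's contribution relative to the baseline
```

Because the toy model is linear and the baseline is zero, each attribution collapses to the coefficient times the feature value, which makes the output easy to sanity-check; real tabular credit models are where this kind of game-theoretic accounting becomes genuinely useful.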


Andrew Moore, who heads up Google Cloud's AI division, explained to the BBC how this math contributes to explainable AI.

“The main question is to do these things called counterfactuals, where the neural network asks itself, for example, ‘Suppose I hadn’t been able to look at the shirt colour of the person walking into the store, would that have changed my estimate of how quickly they were walking?'” Moore told the BBC last month following the launch of Explainable AI at an event in London. “By doing many counterfactuals, it gradually builds up a picture of what it is and isn’t paying attention to when it’s making a prediction.”

This counterfactual approach is so powerful that it renders the explainability problem essentially moot, Moore said. “The era of black box machine learning is behind us,” he told the BBC.
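Moore’s shirt-colour example boils down to a simple perturbation test: hide one input, re-run the model, and see how far the prediction moves. The sketch below is hypothetical (the walking-speed model and its features are invented here) but shows the mechanic.

```python
def counterfactual_effect(predict, x, feature_index, hidden_value):
    """How much does the prediction change if one feature is 'not looked at'
    (replaced with a neutral stand-in value)?"""
    altered = list(x)
    altered[feature_index] = hidden_value
    return predict(x) - predict(altered)

# Hypothetical walking-speed estimator over [shirt_colour_code, stride_length, age]
speed = lambda v: 0.01 * v[0] + 1.5 * v[1] - 0.02 * v[2]

# Hiding shirt colour barely moves the estimate, so the model isn't relying on it.
print(counterfactual_effect(speed, [3.0, 0.8, 40.0], feature_index=0, hidden_value=0.0))
```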

However, there are some limitations to Google’s XAI. For starters, it’s compatible only with the TensorFlow machine learning framework. And the model has to be running on Google Cloud. While this certainly gives Google Cloud a valuable competitive advantage over its public cloud competitors Microsoft Azure and Amazon Web Services – which are moving aggressively to build their own AI systems — it doesn’t benefit companies that don’t want to run on Google Cloud (we are told there may be several still).

That’s where Zest AI comes in. The Burbank, California-based machine learning software company, which was founded by former Google engineers, has taken the same well-established mathematical concepts created by Shapley and his colleague Robert Aumann – another Nobel Prize-winning mathematician – and made them available to its clients in the financial services industry.

Jay Budzik, Zest AI’s CTO, gave us the lowdown on how it all works:

“Google introduced an algorithm called integrated gradients, which is really just a re-packaging of a technique from cooperative game theory,” Budzik told Datanami. “The mathematics that Shapley and his colleague Aumann describe enables you to accurately quantify the contributions of those players.”

By substituting variables in the machine learning models for players in a game, the Aumann-Shapley approach can be used to assess the contributions of each variable to the result of the model as a whole. This is the “sophisticated math” at the core of their XAI approach.
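Integrated gradients itself has a compact recipe: scale the input along a straight path from a baseline, accumulate the model’s gradients along that path, and multiply by the input’s distance from the baseline. The NumPy sketch below is a generic illustration of that recipe (the tiny logistic model and its weights are made up), not Google’s or Zest AI’s code.

```python
import numpy as np

def integrated_gradients(f, grad_f, x, baseline, steps=50):
    """Riemann-sum approximation of integrated gradients:
    (x - baseline) * average gradient along the baseline -> x path."""
    x, baseline = np.asarray(x, float), np.asarray(baseline, float)
    alphas = (np.arange(steps) + 0.5) / steps            # midpoints in (0, 1)
    avg_grad = np.mean([grad_f(baseline + a * (x - baseline)) for a in alphas], axis=0)
    return (x - baseline) * avg_grad

# Toy logistic "approval" model on two features, with its gradient in closed form.
w, b = np.array([1.2, -0.7]), 0.1
f = lambda v: 1.0 / (1.0 + np.exp(-(v @ w + b)))
grad_f = lambda v: f(v) * (1.0 - f(v)) * w

x, baseline = np.array([2.0, 1.0]), np.zeros(2)
attributions = integrated_gradients(f, grad_f, x, baseline)
print(attributions, attributions.sum(), f(x) - f(baseline))
# The attributions sum (approximately) to the change in model output -- the
# "completeness" property that makes the method attractive for explanations.
```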

“But where they have some shortcomings is when they try to explain these combinations of neural networks and tree-based models, like gradient-boosted trees, that aren’t as easy to analyze,” Budzik said. “And so we’ve extended their technique, integrated gradients, with a new technique called generalized integrated gradients. We’ve developed our own set of tools that are a little more directly applicable for financial services with the kinds of models that we build.”

Zest AI’s technology works with any machine learning framework, including scikit-learn and PyTorch, in addition to TensorFlow, which Budzik admitted was “best of breed” for machine learning. The Zest AI solution is presented in the form of a Jupyter data science notebook. It also generates the documents that banks and lenders need to present to regulators to show that their models fall within the guidelines set by the Federal Reserve.

A key element of Zest AI’s offering is the flexibility that allows it to handle complex decision-making systems, Budzik said.

“When your model is not just a deep neural network but a combination of several deep neural networks and several gradient-boosted trees of different modeling techniques, maybe those models are combined by another neural network that understands when each model is stronger and is more accurate and selects which models to pay attention to when,” he said. “That system of models, and their application to things like a loan decision – that whole system has to be studied in order to make an accurate assessment of how the decision is being made.”
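In code terms, the point is that such an ensemble is still a single function of the raw inputs, and the attribution has to be computed against that composed function rather than against any one sub-model. A hypothetical miniature version (all models and weights invented here for illustration) might look like this:

```python
def model_a(x):                 # stand-in for a neural-network score
    return 0.6 * x[0] + 0.1 * x[1]

def model_b(x):                 # stand-in for a gradient-boosted-tree score
    return 0.2 * x[0] + 0.5 * x[2]

def gate(x):                    # stand-in for the model that decides which sub-model to trust
    return 0.7 if x[2] > 0.5 else 0.3

def loan_decision_score(x):     # the whole system, viewed as one function
    w = gate(x)
    return w * model_a(x) + (1.0 - w) * model_b(x)

# Shapley values, integrated gradients, or any other attribution method is then
# run against loan_decision_score, so the explanation reflects the end-to-end decision.
print(loan_decision_score([0.8, 0.4, 0.9]))
```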

It’s cool to see Google investing in this problem because explainability has held machine learning back from being used for really important problems, Budzik said. Now Zest AI is taking the XAI ball and moving it a little further along toward a potential solution.

These aren’t the only vendors tackling the explainability problem in AI. In fact, many large companies are taking on the problem themselves. But as AI and machine learning become more widespread, standardized approaches to solving the explainability problem will be needed, and the approaches offered by Google and Zest AI could very well become the core of a solution.

Related Items:

Bright Skies, Black Boxes, and AI

Opening Up Black Boxes with Explainable AI

AI, You’ve Got Some Explaining To Do

Editor’s note: This article has been corrected. Zest AI developed an algorithm called generalized integrated gradients, not generalized integrated radiance. Datanami regrets the error.

