February 10, 2020

Explainable AI: But Explainable to Whom?

Ellen Friedman

As the power of AI and machine learning has become widely recognized, and as people see the value these approaches can bring to an increasingly data-heavy world, a new need has arisen: the need for explainable AI. How will people know the nature of the automated decisions made by machine learning models? How will they make use of the insights provided by AI-driven systems if they do not understand and trust the automated decisions that underlie them?

The biggest challenge to the next level of adoption of AI and machine learning is not the development of new algorithms, although that work of course continues. The biggest challenge is building confidence and trust in intelligent machine learning systems. Some call this need for confidence and trust a barrier to AI, and in a way it is, but I prefer to think of it as a very reasonable requirement of AI and machine learning. Explainable or interpretable AI is the ability to present the reasoning behind model-based decisions to humans, so it is of critical importance for the success of AI and ML systems. But explainable to whom?

Model-based decisions must be explainable to both technical and non-technical audiences. Who does that include? The people who build these systems are themselves among the technical audiences who need a clear explanation. As Patrick Hall and Navdeep Gill point out in their book An Introduction to Machine Learning Interpretability, Second Edition (published in 2019 by O’Reilly Media), “…machine learning engineers will need more and better tools to debug these ever-more present decision making systems.”

Machine learning experts need convenient and reliable techniques and tools for understanding how automated systems make decisions so that they can appropriately assess, tune, and update models, remove bias, and figure out how to build novel approaches. But they must also be able to explain AI-based decisions to others who may not share their technical expertise with modeling techniques. These audiences include people who are experts on the data used to train models and others who are experts on the business processes that will rely on them. In addition, the users of systems that run on AI-based decisions may also need to know how and why decisions are made if they are to willingly make use of those systems. Hall and Gill go on to say that as explainability enables humans to understand how machine learning systems make decisions, it in turn “…can satisfy basic curiosity or lead to new types of data-driven insights.” Both are useful outcomes, especially when the audience includes your boss.

If you want to know more about an applied perspective on fairness, accountability, transparency, and explainability of these systems, do read An Introduction to Machine Learning Interpretability, Second Edition, available courtesy of H2O.ai. Or dig even deeper with a free hands-on tutorial titled “Machine Learning Interpretability Tutorial”, one among a collection of AI tutorials, which provides a step-by-step lesson in interpretability using state-of-the-art tools. Even if you just read through the background and the overview of the steps in this tutorial, you’ll learn a lot about the basic concepts of interpretability, including the following (a short illustrative sketch appears after the list):

  • Response Function Complexity
  • Scope: Global versus Local Interpretability
  • Application Domain: Model-Agnostic versus Model-Specific
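
To make the scope distinction concrete, here is a minimal sketch of the difference between a global explanation (average feature impact across a dataset) and a local explanation (feature contributions to a single prediction). It is not taken from the tutorial: it uses the open-source shap package with a scikit-learn model on synthetic data as stand-ins, and the TreeExplainer it relies on is an example of a model-specific, rather than model-agnostic, explainer.

# Minimal sketch: global versus local explanations with Shapley values.
# Assumes scikit-learn and the shap package; the data and model are stand-ins.
import numpy as np
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

# Synthetic data standing in for a real training set.
X, y = make_regression(n_samples=500, n_features=4, noise=0.1, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# TreeExplainer is a model-specific explainer for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

# Global scope: average magnitude of each feature's contribution across all rows.
global_importance = np.abs(shap_values).mean(axis=0)
print("Global mean |SHAP| per feature:", np.round(global_importance, 3))

# Local scope: contributions that explain one individual prediction.
print("Local SHAP values for the first row:", np.round(shap_values[0], 3))
print("Model prediction:", model.predict(X[:1])[0])
print("Base value plus contributions:",
      np.ravel(explainer.expected_value)[0] + shap_values[0].sum())

A model-agnostic alternative, such as shap.KernelExplainer, can produce the same kind of local and global summaries for any prediction function, at greater computational cost.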

The tutorial also includes hands-on lessons covering global Shapley values and feature importance, partial dependence plots, decision tree surrogates, and more. And you’ll find a link that lets you try a 21-day free trial of H2O Driverless AI, an automated machine learning platform.
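
For a rough sense of what those hands-on lessons involve, the sketch below, again on synthetic data with scikit-learn rather than the tutorial's own tools and dataset, computes permutation feature importance, a partial dependence curve for a single feature, and a shallow decision tree surrogate fit to the black-box model's predictions.

# Minimal sketch: three global interpretability techniques from the list above,
# applied to a stand-in scikit-learn model on synthetic data.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import partial_dependence, permutation_importance
from sklearn.tree import DecisionTreeRegressor, export_text

X, y = make_regression(n_samples=1000, n_features=5, noise=0.1, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]

# The "black box" whose behavior we want to summarize.
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# 1. Permutation feature importance: how much the score drops when each
#    feature's values are shuffled.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")

# 2. Partial dependence: the average predicted response as feature_0 varies.
pd_result = partial_dependence(model, X, features=[0], grid_resolution=20)
print("Averaged predictions over the grid:", np.round(pd_result["average"][0], 2))

# 3. Decision tree surrogate: a shallow, readable tree trained to mimic the
#    black-box model's predictions rather than the original labels.
surrogate = DecisionTreeRegressor(max_depth=3, random_state=0)
surrogate.fit(X, model.predict(X))
print(export_text(surrogate, feature_names=feature_names))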
