October 28, 2019

Do You Trust and Understand Your Predictive Models?

Patrick Hall and Navdeep Gill

Understanding and trusting models and their results is a hallmark of good science. Analysts, engineers, physicians, researchers, scientists, and people in general need to understand and trust the models and modeling results that affect our work and our lives. For decades, choosing a model that was transparent to human practitioners or consumers often meant choosing straightforward data sources and simpler model forms such as linear models, single decision trees, or business rule systems. Although these simpler approaches were often the correct choice, and still are today, they can fail in real-world scenarios where the underlying phenomena are nonlinear, rare or faint, or highly specific to certain individuals.

Today, the trade-off between the accuracy and interpretability of predictive models has been broken (and maybe it never really existed). The tools now exist to build accurate and sophisticated modeling systems based on heterogeneous data and machine learning algorithms and to enable human understanding and trust in these complex systems. In short, you can now have your accuracy and interpretability cake…and eat it too.
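
To make that claim concrete, here is a minimal sketch (illustrative only, not taken from the report; the synthetic dataset and parameter choices are assumptions) that pairs a nonlinear gradient boosting model with a model-agnostic post-hoc explanation, permutation importance, using scikit-learn:

```python
# A minimal sketch: fit a complex, nonlinear model, then probe it with a
# post-hoc, model-agnostic explanation (permutation importance).
# Dataset and hyperparameters are illustrative, not from the report.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real modeling dataset.
X, y = make_classification(n_samples=1000, n_features=10,
                           n_informative=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A sophisticated, nonlinear model ...
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# ... made more transparent: permutation importance estimates how much
# held-out accuracy drops when each feature's values are shuffled,
# giving a global view of which inputs the model relies on.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:.3f} "
          f"+/- {result.importances_std[i]:.3f}")
```

Permutation importance is just one such technique; the same fit-then-probe pattern extends to methods like partial dependence, surrogate models, and Shapley values.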

To help practitioners make the most of recent and disruptive breakthroughs in debugging, explainability, fairness, and interpretability techniques for machine learning, read “An Introduction to Machine Learning Interpretability, Second Edition”. The report defines key terms, introduces the human and commercial motivations for these techniques, and discusses predictive modeling and machine learning from an applied perspective, focusing on the common challenges of business adoption, internal model documentation, governance, validation requirements, and external regulatory mandates. It also presents an applied taxonomy for debugging, explainability, fairness, and interpretability techniques and outlines the broad set of software tools available for using these methods. General limitations and testing approaches for the outlined techniques are addressed, and finally, a set of open source code examples is presented.

In general, the widespread acceptance of machine learning interpretability techniques will be one of the most important factors in the increasing adoption of machine learning and artificial intelligence in commercial applications and in our day-to-day lives. Hopefully, this report convinces you that interpretable machine learning is technologically feasible. Now, let’s put these approaches into practice, leave the ethical and technical concerns of black-box machine learning in the past, and move on to a future of FATML (fairness, accountability, and transparency in machine learning) and XAI (explainable artificial intelligence).

Again, if you’d like to learn how to make the most of recent and disruptive breakthroughs in debugging, explainability, fairness, and interpretability techniques for machine learning, read “An Introduction to Machine Learning Interpretability, Second Edition”, where you’ll find:

  • Definitions and examples
  • Social and Commercial Motivations for Machine Learning Interpretability
  • A Machine Learning Interpretability Taxonomy for Applied Practitioners
  • Common Interpretability Techniques
  • Limitations and Precautions
  • Testing Interpretability and Fairness
  • Machine Learning Interpretability in Action