October 13, 2020

AI Governance Rises to the Top of the Stack

James Kobielus


Artificial intelligence (AI) is running amok, or at least that is the general perception these days. AI governance matters because the stakes for getting AI right are so high and the consequences so dire if we screw it up.

Governance must be approached from a risk management perspective. AI’s principal risk factors are in the following areas:

  • Can we prevent AI from invading people’s privacy?
  • Can we eliminate socioeconomic biases that may be baked into AI-driven applications?
  • Can we ensure that AI-driven processes are entirely transparent, explicable, and interpretable to average humans?
  • Can we engineer AI algorithms so that there’s always a clear indication of human accountability, responsibility, and liability for their algorithmic outcomes?
  • Can we build ethical and moral principles into AI algorithms so that they weigh the full set of human considerations into decisions that may have life-or-death consequences?
  • Can we automatically align AI applications with stakeholder values, or at least build in the ability to compromise in exceptional cases, thereby preventing the emergence of rogue bots in autonomous decision-making scenarios?
  • Can we throttle AI-driven decision making in circumstances where the uncertainty is too great to justify autonomous actions?
  • Can we institute failsafe procedures so that humans may take back control when automated AI applications reach the limits of their competency?
  • Can we ensure that AI-driven applications behave in consistent, predictable patterns, free from unintended side effects, even when they are required to dynamically adapt to changing circumstances?

  • Can we protect AI applications from adversarial attacks that are designed to exploit vulnerabilities in their underlying statistical algorithms?
  • Can we design AI algorithms that fail gracefully, rather than catastrophically, when the environment data departs significantly from circumstances for which they were trained?

AI Governance as Software-Based Pipeline Automation

AI governance is toothless if the appropriate controls aren’t automated within software-based development and operations (DevOps) pipelines.

AI apps are built, trained, deployed, and governed by teams of data scientists, data engineers, data stewards, and others within complex workflows. To govern those workflows, many organizations are building AI governance controls into their machine learning operations (MLOps) processes. At heart, this requires policy-driven automation of the processes that manage data, statistical models, metadata, and other artifacts used to build, train, and deploy AI applications. It also requires tools for monitoring the usage, behavior, and outcomes of AI apps throughout their lifecycles.
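
As a rough illustration of what policy-driven automation can look like, here is a minimal sketch of a deployment gate that an MLOps pipeline might run before promoting a candidate model. The GovernancePolicy thresholds, metric names, and the shape of the metrics dictionary are illustrative assumptions, not any particular vendor's API.

```python
# Minimal sketch of a policy-driven deployment gate in an MLOps pipeline.
# All thresholds and metric names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class GovernancePolicy:
    min_accuracy: float = 0.90         # minimum acceptable holdout accuracy
    max_demographic_gap: float = 0.05  # max allowed gap in positive rates across groups
    require_explainability: bool = True

def deployment_gate(metrics: dict, policy: GovernancePolicy) -> tuple[bool, list[str]]:
    """Return (approved, reasons) for a candidate model's evaluation metrics."""
    violations = []
    accuracy = metrics.get("accuracy", 0.0)
    if accuracy < policy.min_accuracy:
        violations.append(f"accuracy {accuracy:.3f} is below the {policy.min_accuracy:.2f} policy floor")
    if metrics.get("demographic_gap", 1.0) > policy.max_demographic_gap:
        violations.append("disparate impact across demographic groups exceeds the policy limit")
    if policy.require_explainability and not metrics.get("explanations_available", False):
        violations.append("no feature-attribution explanations attached to the model artifact")
    return (len(violations) == 0, violations)

# Example: a CI step would fail the pipeline (and block deployment) on violations.
approved, reasons = deployment_gate(
    {"accuracy": 0.93, "demographic_gap": 0.02, "explanations_available": True},
    GovernancePolicy(),
)
print("approved" if approved else f"blocked: {reasons}")
```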

The quintessential risk of any AI-driven process is not knowing whether we can trust a deployed statistical model to do its assigned task accurately and reliably. If a statistical model’s predictive fitness decays to the point where it cannot do its assigned tasks (such as recognizing faces, understanding human speech, or predicting customer behavior) at a sufficient level of accuracy, it’s essentially useless to the enterprise that built and deployed it.

Consequently, the core function of AI governance is model assurance. This is the ability to determine whether an AI application’s machine learning models remain predictively fit for their assigned tasks and, if they aren’t, to put them back on the straight and narrow.
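
Here is a minimal sketch of that assurance loop, assuming that ground-truth labels eventually arrive for a sample of production predictions. The tolerance band, window size, and ModelAssuranceMonitor class are hypothetical, not any specific platform's interface.

```python
# Minimal model-assurance loop: score recent labeled predictions and
# flag the model for retraining when accuracy falls below a tolerance band.
# The threshold values are illustrative placeholders.
from collections import deque

class ModelAssuranceMonitor:
    def __init__(self, baseline_accuracy: float, tolerance: float = 0.05, window: int = 1000):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)  # rolling window of correct/incorrect flags

    def record(self, prediction, ground_truth) -> None:
        self.outcomes.append(prediction == ground_truth)

    def current_accuracy(self) -> float:
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 1.0

    def needs_retraining(self) -> bool:
        return self.current_accuracy() < self.baseline - self.tolerance

monitor = ModelAssuranceMonitor(baseline_accuracy=0.92)
# In production this would be fed by a feedback pipeline joining predictions to labels.
for pred, truth in [(1, 1), (0, 1), (1, 1), (0, 0)]:
    monitor.record(pred, truth)
if monitor.needs_retraining():
    print("Model accuracy has decayed below policy; schedule retraining.")
```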

However, predictive accuracy can be a tough performance metric to guarantee. AI’s statistical models (typically implemented as artificial neural networks) may be so complex and arcane that they obscure how they actually drive automated inferencing. Just as worrisome, statistically grounded applications may inadvertently obfuscate responsibility for any biases and other adverse consequences that their automated decisions may produce. In addition, these probabilistic models may be evaluated and retrained so infrequently that what was once fit for a specific purpose has since lost its predictive ability.
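
One common way to catch this kind of silent decay between retraining cycles is to compare the distribution of incoming production data against the data the model was trained on. The sketch below applies a two-sample Kolmogorov-Smirnov test from SciPy to a single numeric feature; the synthetic data and the alert threshold are assumptions for illustration only.

```python
# Sketch of input-drift detection: compare a production feature sample
# against the training-time distribution with a two-sample KS test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5000)   # distribution seen at training time
production_feature = rng.normal(loc=0.4, scale=1.0, size=500)  # recent production traffic (shifted)

statistic, p_value = ks_2samp(training_feature, production_feature)
if p_value < 0.01:  # alert threshold is an illustrative policy choice
    print(f"Possible data drift detected (KS statistic={statistic:.3f}, p={p_value:.4f}); "
          "flag the model for re-evaluation or retraining.")
```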

Embedding AI Model Assurance in MLOps Platforms

Enterprises that have bet the business on AI-powered processes must consider whether to acquire model assurance as an embedded feature of their MLOps platforms or from startup vendors that focus on this exciting niche.

Fortunately, there is a growing range of data science DevOps environments that offer robust model assurance. The latest generation of these tools leverages cloud-native infrastructure to deploy and manage a steady stream of AI models and code builds all the way to the edge. Chief among the commercial offerings are:

  • Google Cloud AI Platform offers such model quality assurance features as continuous evaluation, which lets data scientists compare model predictions with ground truth labels to gain continual feedback and optimize model accuracy.
  • H2O.ai Driverless AI offers a deep bench of model quality assurance features. It supports analysis of whether a model produces disparate adverse outcomes for various demographic groups even if it wasn’t designed with that outcome in mind. It can automate monitoring of deployed models for predictive decay; benchmarking of alternative models for A/B testing; and alerting of system administrators when models need to be recalibrated, retrained, and otherwise maintained to keep them production-ready.
  • Microsoft Azure Machine Learning MLOps can notify and send alerts on events in the ML lifecycle, such as experiment completion, model registration, model deployment, and data drift detection. It can monitor machine learning applications for model-specific metrics and provide monitoring and alerts on your machine learning infrastructure. And it can automate retraining, updating, and redeployment of models based on new data and other operational and business factors.
  • Amazon SageMaker Model Monitor continuously monitors machine learning models in production in the Amazon SageMaker cloud service, detects deviations such as data drift that can degrade model performance over time, and alerts users to take remedial actions, such as auditing or retraining models. Monitoring jobs can be scheduled to run at a regular cadence, can push summary metrics to Amazon CloudWatch to set alerts and triggers for corrective actions (a minimal sketch of that CloudWatch step appears after this list), and run on a broad range of the instance types available in SageMaker.
  • Superwise’s AI Assurance provides a real-time platform for monitoring and maintaining the accuracy of deployed AI models. It enables stakeholders to catch model decay and other issues with deployed AI models before they can have a negative business impact. It flags model inaccuracies that stem from changes in the data that feeds AI models. It can also catch inaccuracies associated with changes in the business environments into which the models were deployed. It provides proactive recommendations for data science teams to take manual action to keep models accurate, unbiased, and otherwise fit for purpose. It can also automatically execute some corrective actions to keep models from drifting into potentially suboptimal territory.
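
To make the CloudWatch step mentioned in the SageMaker item concrete, the hedged sketch below uses boto3 to publish a custom model-quality metric and to set an alarm on it. The namespace, metric name, endpoint name, and threshold are illustrative choices rather than values any Model Monitor job actually emits.

```python
# Hedged sketch: publish a custom model-quality metric to Amazon CloudWatch
# and alarm when it drops below a governance threshold. Namespace, metric
# name, endpoint name, and threshold are illustrative assumptions.
import boto3

cloudwatch = boto3.client("cloudwatch")

# Push the latest measured accuracy for a deployed model endpoint.
cloudwatch.put_metric_data(
    Namespace="Custom/ModelAssurance",
    MetricData=[{
        "MetricName": "ProductionAccuracy",
        "Dimensions": [{"Name": "EndpointName", "Value": "churn-model-prod"}],
        "Value": 0.91,
        "Unit": "None",
    }],
)

# Alarm if the hourly average accuracy falls below the policy floor.
cloudwatch.put_metric_alarm(
    AlarmName="churn-model-accuracy-decay",
    Namespace="Custom/ModelAssurance",
    MetricName="ProductionAccuracy",
    Dimensions=[{"Name": "EndpointName", "Value": "churn-model-prod"}],
    Statistic="Average",
    Period=3600,
    EvaluationPeriods=1,
    Threshold=0.85,
    ComparisonOperator="LessThanThreshold",
    TreatMissingData="breaching",  # treat missing feedback data as a reason to review
)
```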

Takeaway

Though I think that AI isn’t the public menace it’s been made out to be, the reality is that we’re likely to see more jurisdictions tighten the regulatory screws on this technology.

As we move more deeply into the ’20s, AI applications will be among the most disruptive technologies, in the best and worst senses of the word. If allowed to proliferate unmonitored and uncontrolled, AI model inaccuracies can wreak havoc on society. Some of AI’s risks stem from design limitations in a specific buildout of the technology. Others may be due to inadequate runtime governance over live AI apps. Still others may be intrinsic to the inscrutable “black box” complexity of the machine learning, deep learning, and other statistical models upon which AI depends.

To mitigate these risks, society will increasingly demand automated governance of these models’ performance in every deployment scenario.

About the author: James Kobielus is an industry veteran who has written extensively about big data, AI, and enterprise software. James previously was associated with Futurum Research, SiliconANGLE Wikibon, IBM, and Forrester Research. Currently, he is an independent tech industry analyst based in Alexandria, Virginia.

Related Items:

Data Privacy Is on the Defensive During the Coronavirus Panic

Giving DevOps Teeth To Crunch Down on AI Ethics Governance

Let’s Accept That AI Leadership Is Everywhere
