May 7, 2018

Ensuring Iron-Clad Algorithmic Accountability in the GDPR Era

James Kobielus


Artificial intelligence’s “black boxes” are about to be blown wide open, whether or not the data science world is ready for it.

That’s because AI’s algorithmic models are in the bullseye of the European Union’s General Data Protection Regulation (GDPR), which takes effect on May 25. The EU designed GDPR to protect the privacy of European citizens, recognizing that the personally identifiable information (PII) that companies hold and process on customers’ behalf belongs to the individuals themselves. More to the point, those individuals have the right to control how their personal data is processed, whether that handling be done through algorithmic automation, manual methods, or some combination thereof.

With that deadline approaching, any enterprise that operates in the EU must fast-track its efforts to bring greater accountability to its machine learning (ML), deep learning (DL), and other AI-based applications. Failure to comply with GDPR’s strict requirements could expose offenders to significant financial penalties of up to 4 percent of annual global revenues. Specifically, GDPR requires that companies:

  • proactively send privacy disclosures to PII subjects stating the algorithmic foundation of automated decision-making products that use their data;
  • obtain specific, informed, and unambiguous consent from data subjects and enter into contracts with the relevant data controllers before algorithmically processing PII;
  • notify customers prior to changing the way they algorithmically process their PII;
  • comply with individual PII subjects’ rights to restrict algorithmic profiling and processing and to withdraw consent on uses of their data; and
  • enable PII subjects to query and introspect an easy-to-understand narrative and audit log of the features, data, and decision paths of the algorithmic decisions taken on their data.

If that sounds like a tall order for most enterprise data managers to address on GDPR day one, you’re not mistaken. As I discussed in a previous Datanami column, delivering a plain-English accounting of how most AI algorithms generate their predictions, classifications, and other inferences can be difficult, even for professional data scientists. And there is no shortage of initiatives to define frameworks and methods for ensuring that AI-driven processes are entirely accountable, a requirement that is often treated as synonymous with being transparent, explainable, and interpretable.


What it all comes down to is identifying standard approaches for associating algorithmic outcomes with the specific input-data “features” that an AI model is designed to operate upon. This is central to understanding why an AI application might follow a specific decision path in particular circumstances. And it’s essential if your company is ever requested by an EU citizen to provide a full accounting of why an AI-driven application rejected their loan application, sent them a movie recommendation that they found offensive, or misclassified their photo as belonging to someone of the opposite gender.

Not surprisingly, many in the AI community are exploring the use of ML-driven approaches to bring greater accountability to how these data-driven algorithms connect features to outcomes. As the GDPR-compliance deadline approaches, data professionals should accelerate their exploration of the following approaches for ensuring AI accountability:

  • Feature introspection: This involves generating visualizations that either spell out the precise narrative relationship of model features to decision paths or provide an exploratory tool for users to browse, query, and assess those relationships for themselves. Here’s research that provides blended visual and textual explanations of specific algorithmic decision paths. (A minimal decision-path sketch appears after this list.)
  • Feature tagging: This involves applying comprehensive, consistent labeling to model feature sets. For example, this project focuses on algorithmic learning of semantic concepts in video feeds through ML applied to linked audio, visuals, and text. This drives the automated generation of metadata that might be used to explain how computer vision algorithms identify specific faces, genders, and other phenomena in video feeds.
  • Feature engineering: This involves developing models in high-level probabilistic programming languages in order to build interpretable feature sets into the underlying algorithmic logic. Another approach might be to build more interpretable models from the outset, perhaps sacrificing some accuracy by incorporating fewer features, but retaining those that can be rolled up into higher-level features that are easier to tie to real-world narratives of the application domain. One way to do this might be to use variational auto-encoders that infer interpretable feature sets from higher-dimensional and more nuanced (but more opaque) representations of the same domain.

  • Feature experimentation: This involves using interactive approaches to assess the sensitivity of model outcomes to various features. For example, the Local Interpretable Model-Agnostic Explanations (LIME) research iteratively tests the sensitivity of algorithmic outcomes to the presence or absence of particular features in the data. For image inputs, this can be done by algorithmically graying out specific pixels, feeding the resulting image back through an AI classifier, and seeing whether its classification changes. For textual inputs, here’s research that algorithmically modifies the description of a specific data subject to determine whether that causes a natural language processing algorithm to classify them differently. However, as this research indicates, the experimental approach might not be effective when you’re making minute changes to interpretable neurons in an AI model, because a well-engineered model may be able to make robust predictions in spite of such alterations. (A minimal perturbation sketch appears after this list.)
  • Feature provenance: This involves logging every step of the AI development, training, and deployment pipeline. It ensures that there is always a complete audit trail documenting the provenance of every feature set, model, metadata, and other artifact that contributed to particular algorithmic results, as well as of all validation and testing that may or may not have been performed on deployed models. That audit trail will lay bare the extent to which biases have been introduced into the AI pipeline that may have had adverse impacts on particular individuals or groups of PII subjects. (A minimal provenance-logging sketch appears after this list.)
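
To make feature introspection concrete, here is a minimal decision-path sketch in Python. It assumes a scikit-learn decision tree trained on a stock demonstration dataset; the model, data, and narrative format are illustrative choices rather than a prescribed GDPR mechanism, and real credit-scoring or recommendation models would need a richer treatment.

```python
# Hypothetical feature-introspection sketch: trace the decision path a
# tree-based classifier followed for one record and render it as a
# plain-language narrative that could back an audit response.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier

data = load_breast_cancer()
X, y, names = data.data, data.target, data.feature_names
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

def explain_decision(model, x, feature_names):
    """Return human-readable steps describing the path taken for one sample."""
    path = model.decision_path(x.reshape(1, -1))   # nodes visited by this sample
    leaf = model.apply(x.reshape(1, -1))[0]        # terminal node reached
    tree = model.tree_
    steps = []
    for node in path.indices:
        if node == leaf:
            continue
        feat, thresh = tree.feature[node], tree.threshold[node]
        relation = "<=" if x[feat] <= thresh else ">"
        steps.append(f"{feature_names[feat]} = {x[feat]:.2f} {relation} {thresh:.2f}")
    return steps

for step in explain_decision(model, X[0], names):
    print(step)
```

Even this simple trace ties a specific outcome back to named features and thresholds, which is the kind of narrative that GDPR’s transparency provisions anticipate.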
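
For feature experimentation, the perturbation idea described above can be sketched in a model-agnostic way: gray out one patch of an image at a time, re-score it, and record how much the score for the originally predicted class drops. The patch size, fill value, and toy stand-in classifier below are assumptions for illustration; in practice the predict function would wrap your deployed model.

```python
# Hypothetical occlusion-sensitivity sketch: measure how much each grayed-out
# patch changes the classifier's score for the originally predicted class.
import numpy as np

def occlusion_sensitivity(image, predict_fn, patch=8, fill=0.5):
    """Heatmap of score drops when each patch of the image is grayed out."""
    base = predict_fn(image[np.newaxis, ...])[0]
    target = int(np.argmax(base))                  # class the model originally picked
    h, w = image.shape[:2]
    heatmap = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch, ...] = fill   # "gray out" this patch
            new = predict_fn(occluded[np.newaxis, ...])[0]
            heatmap[i // patch, j // patch] = base[target] - new[target]
    return heatmap

# Toy stand-in classifier whose score depends on the brightness of the image center.
def toy_predict(batch):
    center = batch[:, 8:24, 8:24].mean(axis=(1, 2, 3))
    return np.stack([1 - center, center], axis=1)

image = np.random.rand(32, 32, 3)
print(occlusion_sensitivity(image, toy_predict).round(2))
```

Patches whose removal causes large score drops are the regions the model leaned on most heavily, which is the feature-level evidence a data subject might demand.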
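
Finally, feature provenance comes down to disciplined logging. The sketch below appends one JSON-lines record per pipeline stage, pinning the exact dataset and feature set via a hash alongside the model version and validation notes. The field names and file format are illustrative assumptions, not a GDPR-mandated schema.

```python
# Hypothetical provenance-logging sketch: append an audit record tying a
# pipeline stage to hashed data, the feature list, and the model version.
import hashlib
import json
from datetime import datetime, timezone

def fingerprint(path):
    """SHA-256 of a file, used to pin the exact dataset or feature-set version."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def log_provenance(logbook, *, stage, dataset_path, features, model_version, notes=""):
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "stage": stage,                      # e.g. "training", "validation", "deployment"
        "dataset_sha256": fingerprint(dataset_path),
        "features": features,
        "model_version": model_version,
        "notes": notes,
    }
    with open(logbook, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Hypothetical usage (paths, names, and metrics are made up for illustration):
# log_provenance("audit_log.jsonl", stage="training",
#                dataset_path="loans_2018Q1.csv",
#                features=["income", "tenure", "credit_history_len"],
#                model_version="credit-risk-v2.3",
#                notes="5-fold CV, AUC 0.81; bias check against protected attributes passed")
```

A log like this is what lets a compliance team reconstruct, after the fact, exactly which data and model produced a decision that an individual is challenging.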

If you’re worried that you won’t have all of this in place by the end of this month, you’re not alone. As longtime industry expert Bernard Marr told me recently at DataWorks Summit in Berlin, “My sense is that there is a lot of catching up to do [on GDPR compliance]. I think people are scrambling to get ready at the moment. But nobody really knows what getting ready really means; I think there [are] a lot of different interpretations. I’ve been talking to a few lawyers recently, and everybody has different interpretations of how they can push the boundaries.”

Many enterprises are still trying to put together the foundational capabilities for comprehensive GDPR compliance, with algorithmic accountability tools far from the only piece of the puzzle. The EU has only sketched out the broad GDPR mandate, leaving many of the implementation details to be worked out by national-level regulators in ongoing discussions with industry and practitioners.

About the author: James Kobielus is SiliconANGLE Wikibon’s lead analyst for Data Science, Deep Learning, and Application Development.

