April 12, 2019

EU Ethics Rules Seek to Balance AI Risks, Benefits


European regulators continue to take the lead on a range of critical technology policy issues spanning data privacy and, now, “trustworthy” AI.

On the heels of its sweeping General Data Protection Regulation, considered by at least one observer as “the most significant change in privacy law in decades,” the European Union this week unveiled ethics guidelines for “building trust in human-centric AI.”

First and foremost, the EU framework emphasizes human oversight of AI development: Emerging systems should serve humans, not the other way around. “Proper oversight mechanisms need to be ensured, which can be achieved through human-in-the-loop, human-on-the-loop and human-in-command approaches,” regulators said.

Along with emphasizing consumer privacy and data governance, the AI guidelines also stress the need for thoroughly vetted algorithms undergirded by robust and rigorously tested software stacks. AI systems "need to be safe, ensuring a fall back plan in case something goes wrong, as well as being accurate, reliable and reproducible," the EU panel said.

Hence, the European guidelines stake out a position at odds with so-called “black box” AI platforms unable to explain, for example, how models in a deep learning system arrive at a conclusion or prediction.

The European Commission established a working group last June to begin work on its ethical AI guidelines. The first draft released in December attracted more than 500 comments from AI stakeholders.

The guidelines released this week also cover key areas of concern, including algorithmic bias, accountability and the need for transparent AI systems and business models. “AI systems and their decisions should be explained in a manner adapted to the stakeholder concerned,” the guidelines state.

“Humans need to be aware that they are interacting with an AI system, and must be informed of the system’s capabilities and limitations.”

The EU guidelines are sure to fuel the ongoing debate about how far regulators should go in controlling AI technology without stifling innovation. Still, a consensus appears to be emerging to retain some form of human control over AI as opposition grows to the deployment of autonomous platforms.

"The development of AI is not a means in itself," asserted Pekka Ala-Pietilä, a member of the European Commission's AI High-Level Group. "The goal is to increase human well-being, which necessitates a holistic approach, ensuring that we maximize the benefits of AI, while at the same time minimizing its risks."

EU regulators will next launch a "piloting process" this summer to hammer out best practices for implementing the EU's version of trustworthy AI. A senior-level expert group will then review proposals before the European Commission issues formal recommendations, perhaps as early as next year.

The EU’s AI guidelines can be downloaded here.
