February 27, 2018

The GDPR: An Artificial Intelligence Killer?

Todd Wright and Mary Beth Ainsworth


If anyone has doubts about the momentum behind artificial intelligence, a quick read of this post from CMO.com should erase them.

Consider some highlights:

  • In the immediate future, execs are looking for AI to alleviate repetitive, menial tasks, such as paperwork (82 percent), scheduling (79 percent) and timesheets (78 percent).
  • Eighty percent of executives believe AI boosts productivity.
  • By 2025, the artificial intelligence market will surpass $100 billion.

But with all the excitement regarding the potential of AI, there are concerns. One of the primary ones is how to address data privacy. Nowhere is the tension between data privacy and artificial intelligence more pronounced than in the European Union’s General Data Protection Regulation (GDPR).

The GDPR, adopted in April 2016 and taking effect this May, is the first change to EU privacy laws in 23 years. The Council of the European Union’s intention is to strengthen and unify data protection for all individuals within the EU. The GDPR aims primarily to give EU residents back control over their personal data and how it gets processed. Whether or not your organization is based in an EU member nation, it must adhere to the GDPR if it processes the data of anyone residing in the EU.

While many of the data management aspects of the GDPR have organizations frantically working to meet the deadline, the question of how AI and the GDPR will coexist has many rethinking how they will market in the future.

Consider some of the aspects of the GDPR as it relates to the profiling and use of analytics on individuals:

  • Establishing informed consent (proof that the company has the person’s consent to process his/her personal data).
  • Ability to log and present to auditors (when required) details on the use of profiling (i.e., the use of personal characteristics or behavior patterns to make generalizations about a person).
  • Ability for individuals to withdraw consent to profiling.
  • Uncovering potential algorithmic biases.
  • Requirement that human judgment be involved in every profiling decision.

While all these GDPR rules are daunting to organizations that have used “traditional” analytics for years, the use of AI within the realm of profiling and analytics poses even more challenges. Some have even asked, “Is the GDPR an AI killer?”

Organizations have several common questions:

  • How can consent be managed within AI?
  • Is it possible to find profiling details within AI algorithms?
  • How can biases be found within AI algorithms, and can they be stopped?
  • How do you involve humans within AI when the very nature of AI is for machines to act and decide on their own?

Artificial intelligence is the science of training systems to emulate human tasks through learning and automation. With AI, machines can learn from experience, adjust to new inputs and accomplish specific tasks without manual intervention. The explosion in market hype around the term is closely tied to advances in deep learning and cognitive science, but AI spans a variety of algorithms and methods. An application doesn’t require the newest technologies to be considered AI.

AI systems extract insights from the data they are fed. And machine intelligence can’t take into consideration factors that exist outside of the data as it is presented. This means that the system is not going to magically comply with GDPR unless humans explicitly program AI systems to prompt, tag and associate consent actions as part of a data management framework. Likewise, algorithmic bias is a reflection of human bias threaded throughout the data that is presented to the machine. A machine will learn bias if the data holds bias; it cannot learn bias from interactions as humans do.


Reflecting on the profiling aspects of GDPR, how can businesses capitalize on the power of artificial intelligence while ensuring compliance with GDPR?

Organizational processes must reflect a standard for data governance to ensure that legal processes for obtaining consent are appropriately captured within an organization’s data management framework. This requires that people, processes and technology are all in sync with compliance requirements involving consent.
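To make that concrete, here is a minimal sketch of what capturing consent inside a data management framework might look like. All names and fields are hypothetical, illustrative only, and not tied to any particular product:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Dict, List, Optional

@dataclass
class ConsentRecord:
    """One consent action captured for a data subject (hypothetical schema)."""
    subject_id: str
    purpose: str                      # e.g. "marketing_profiling"
    granted_at: datetime
    revoked_at: Optional[datetime] = None

class ConsentLedger:
    """Tiny in-memory ledger; a real framework would persist and audit this."""
    def __init__(self) -> None:
        self._records: Dict[str, List[ConsentRecord]] = {}

    def grant(self, subject_id: str, purpose: str) -> ConsentRecord:
        rec = ConsentRecord(subject_id, purpose, datetime.now(timezone.utc))
        self._records.setdefault(subject_id, []).append(rec)
        return rec

    def has_consent(self, subject_id: str, purpose: str) -> bool:
        return any(r.purpose == purpose and r.revoked_at is None
                   for r in self._records.get(subject_id, []))

# Usage: a profiling job checks consent before scoring a record.
ledger = ConsentLedger()
ledger.grant("subject-123", "marketing_profiling")
assert ledger.has_consent("subject-123", "marketing_profiling")
```

The point is not the code itself but the discipline it represents: consent is a first-class data asset that profiling jobs must check, not an afterthought.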

The ability to log and present profiling details to auditors can be addressed by mapping data lineage and clearly defining decision-making criteria to ensure AI systems are designed to evaluate data based on organizational procedures.
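One way to make profiling decisions presentable to an auditor is to log, for every automated decision, the inputs used, the model version, and the lineage of the source data. A hypothetical sketch, with illustrative field names:

```python
import json
from datetime import datetime, timezone

def log_profiling_decision(subject_id, decision, features, model_version,
                           source_datasets, log_path="profiling_audit.log"):
    """Append one auditable record per automated profiling decision.

    source_datasets captures simple data lineage: which upstream tables
    or files the features were derived from.
    """
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "subject_id": subject_id,
        "decision": decision,              # e.g. "offer_discount" or "deny_loan"
        "features_used": features,         # the inputs the model actually saw
        "model_version": model_version,    # ties the decision to one model build
        "data_lineage": source_datasets,   # e.g. ["crm.contacts", "web.clicks"]
    }
    with open(log_path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(entry) + "\n")

# Example call from inside a scoring pipeline.
log_profiling_decision(
    subject_id="subject-123",
    decision="offer_discount",
    features={"recent_purchases": 4, "email_opens": 12},
    model_version="propensity-v2.3",
    source_datasets=["crm.contacts", "web.clickstream"],
)
```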

Organizations must incorporate clear and succinct ways for people to revoke consent and tie that revocation to a series of actions within a data governance model. The action to revoke consent should link to data lineage associated with the initial consent to provide proof of GDPR compliance.
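Revocation can be modeled as an action that closes out the original consent record and fans out to the datasets derived from it, so the revocation stays traceable back to the initial consent. A minimal, hypothetical sketch of that fan-out:

```python
from datetime import datetime, timezone

# Hypothetical consent store: subject_id -> list of consent records,
# each carrying the lineage of datasets derived from it.
consents = {
    "subject-123": [
        {"purpose": "marketing_profiling",
         "granted_at": "2018-02-01T10:00:00+00:00",
         "revoked_at": None,
         "derived_datasets": ["crm.segments", "ml.propensity_features"]},
    ],
}

def revoke_consent(subject_id, purpose, propagate):
    """Close out matching consent records and propagate the revocation
    to every dataset derived from them (a simple data-lineage fan-out)."""
    proof = []
    for rec in consents.get(subject_id, []):
        if rec["purpose"] == purpose and rec["revoked_at"] is None:
            rec["revoked_at"] = datetime.now(timezone.utc).isoformat()
            for dataset in rec["derived_datasets"]:
                propagate(subject_id, dataset)   # e.g. queue a deletion job
            proof.append(rec)
    return proof   # the updated records double as evidence of compliance

# Usage: print instead of queueing real deletion jobs.
revoke_consent("subject-123", "marketing_profiling",
               propagate=lambda sid, ds: print(f"remove {sid} from {ds}"))
```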

Uncovering potential algorithmic biases in AI systems requires a human to identify a bias that is present in the data being collected, but AI can also develop bias by not seeing factors that were excluded or missing from the data examined. To comply with GDPR, organizations using AI must provide a process to appeal automated decisions and request a human review of the data. For example, GDPR requires that if an algorithm is used to deny someone a loan, he/she must have the ability to appeal the denial and request a human review of the loan application.
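A common way to support that appeal requirement is to let an automated decision be flagged for human review and to withhold any final outcome until a reviewer signs off. A small, hypothetical sketch:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LoanDecision:
    """An automated decision plus its review state (hypothetical fields)."""
    application_id: str
    automated_outcome: str            # e.g. "deny"
    appeal_requested: bool = False
    reviewer: Optional[str] = None
    final_outcome: Optional[str] = None

def request_appeal(decision: LoanDecision) -> None:
    """Called when the applicant exercises the right to a human review."""
    decision.appeal_requested = True
    decision.final_outcome = None     # the automated outcome is no longer final

def record_human_review(decision: LoanDecision, reviewer: str, outcome: str) -> None:
    """A named human reviewer records the final decision."""
    decision.reviewer = reviewer
    decision.final_outcome = outcome

# Usage: the model denies, the applicant appeals, a human decides.
d = LoanDecision("app-42", automated_outcome="deny")
request_appeal(d)
record_human_review(d, reviewer="analyst.jones", outcome="approve")
print(d)
```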

Human involvement in profiling decisions can be tracked through authentication standards for data access, lineage that links the authenticated user to the data accessed, and embedded processes that require a human action before a final decision is issued.
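One illustrative way to embed that requirement is to refuse to finalize a profiling decision unless an authenticated approver is named, recording who approved what and when. The sketch below is an assumption about how such a guard might look, not a prescribed implementation:

```python
import functools
from datetime import datetime, timezone

audit_trail = []   # in a real system this would be a tamper-evident store

def requires_human_signoff(func):
    """Refuse to finalize a decision unless a human approver is supplied;
    record who approved it, what data was used, and when."""
    @functools.wraps(func)
    def wrapper(*args, approver=None, **kwargs):
        if not approver:
            raise PermissionError("A human approver is required for this decision.")
        result = func(*args, **kwargs)
        audit_trail.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "approver": approver,            # ties the decision to an authenticated user
            "function": func.__name__,
            "inputs": {"args": args, "kwargs": kwargs},   # links back to the data used
            "result": result,
        })
        return result
    return wrapper

@requires_human_signoff
def finalize_segment_assignment(subject_id, segment):
    return {"subject_id": subject_id, "segment": segment}

# Usage: the call fails without an approver and succeeds with one.
finalize_segment_assignment("subject-123", "high_value", approver="analyst.jones")
```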

AI is built, implemented and refined by humans. Although the purpose of AI is to automate some tasks without the need for human intervention, AI requires substantial human influence to perform as expected. GDPR emphasizes the importance of that human influence and simply introduces a higher level of accountability for humans and the machines they use to make decisions.

So, is the GDPR an AI killer? The answer is a resounding no.

About the authors: Todd Wright leads Global Product Marketing for SAS Data Management solutions. He has 15 years of experience in data management software, including sales and marketing positions at DataFlux and SAS. Wright is instrumental in developing customer relationships and creating strategic marketing plans that drive awareness, consideration, education and demand for SAS Data Management and GDPR solutions.

Mary Beth Ainsworth is an AI and Language Analytics Strategist at SAS. She is responsible for the global SAS messaging of artificial intelligence and text analytics. Prior to SAS, she spent her career as an intelligence analyst and senior instructor in the US Department of Defense and Intelligence Community, primarily supporting expeditionary units and special operations.

Related Items:

GDPR: Say Goodbye to Big Data’s Wild West

Keeping Your Models on the Straight and Narrow
