August 2, 2019

AI Ethics Still In Its Infancy

(hafakot/Shutterstock)

AI is having a moment in the sun. There’s no doubt about that. But despite the trillions of dollars in value that AI is expected to generate over the coming years through the widespread automation of repetitive tasks with emerging technologies like neural networks, a major issue is dogging the field: the rules of the road for what’s ethical and what’s not are vague, and that’s a problem.

Vic Katyal, who heads up risk and security as a principal at Deloitte Consulting, advises large companies on how to think about AI and deal with ethical issues that arise. Earlier this year, Katyal noted that 40 to 50 publicly traded companies have made SEC filings disclosing AI-related risks.

“Obviously they have concerns in the sense that there’s a proliferation of AI happening in the enterprise and who’s watching over it,” Katyal said. “Those are topics emerging. But in terms of action-taking, I would say it’s still very much in its infancy.”

There’s a land grab currently underway as companies seek to differentiate their products and services with AI. Most of the big companies that Katyal advises are adopting AI, but many are doing so quite cautiously.

“It’s the Wild West when it comes to AI,” he said. “It’s no longer the pain of the companies out in Silicon Valley. Everybody is [concerned about it] because there’s a fear of what’s in the black box.”

AI Abuses

You don’t have to look far to find examples of AI abuses. Al Jazeera, the Qatar-based news organization, recently ran a series called “All Hail the Algorithm” documenting alleged cases of AI run amok.

(Wit Olszewski/Shutterstock)

“Algorithms are just as much a part of our modern infrastructure as buildings and roads,” said Al Jazeera journalist Ali Rae. “So getting to grips with how they work and to whose benefit is really important if we want to understand the world we live in today.”

Rae’s first piece documents the “robodebt” case out of Australia. For years, Australia’s Department of Human Services kept humans in the loop to flag potentially anomalous payouts under its Centrelink welfare program. When the agency handed those decisions entirely to a data-matching algorithm called the Online Compliance Intervention (OCI) in 2016, the number of cases flagged surged more than 50-fold.
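The mechanics behind that surge, as widely reported in coverage of the case, came down to income averaging: OCI smeared a person’s annual income from tax records evenly across the year’s 26 fortnights and raised a discrepancy wherever the average exceeded what the recipient had reported for a given fortnight. The actual OCI code is not public; the sketch below, with hypothetical field names and figures, shows how that kind of averaging can flag someone who earned unevenly but reported honestly.

    # A minimal sketch of fortnightly income averaging; this is not the
    # actual OCI code (which is not public), and all names and figures
    # here are hypothetical.
    FORTNIGHTS_PER_YEAR = 26

    def flag_debt(annual_income: float,
                  reported_by_fortnight: list[float],
                  tolerance: float = 1.0) -> bool:
        """Raise a 'debt' flag if the yearly average exceeds any fortnightly report."""
        averaged = annual_income / FORTNIGHTS_PER_YEAR
        return any(averaged - reported > tolerance
                   for reported in reported_by_fortnight)

    # A seasonal worker earns $26,000 across six fortnights, then nothing,
    # and reports each fortnight's actual income accurately.
    reported = [26_000 / 6] * 6 + [0.0] * 20
    print(flag_debt(26_000, reported))  # True: the $1,000 average contradicts
                                        # 20 honest zero-income reports

A human reviewer might treat such a discrepancy as a cue to check payslips; fully automated, each one becomes a flag, which goes some way toward explaining the 50-fold jump.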

The robodebt issue has thrust the Australian government into the position of explaining how OCI reaches its conclusions. It has thus far struggled to come up with answers, fueling critics who are calling for the program to be scrapped.

“What happens to trust in a world driven by algorithms?” Rae asked. “As more and more decisions are made for us by these complex pieces of code, the question that comes up is inevitable: Can we trust our algorithms?”

Emerging Ethics

There are two sides to the hype surrounding AI. One side is marked by an enthusiastic embrace of the great potential to automate repetitive tasks and decision-making. The other side is marked by fear of being unfairly judged by algorithms and of losing jobs.

(BeeBright/Shutterstock)

Beena Ammanath, who was previously the CTO of AI for HPE and is the founder and CEO of Humans for AI, says fear is driving customers to ask about AI ethics.

“There’s that fear about what should we be doing, and how do we think about this?” Ammanath told Datanami at HPE’s conference in June. “Ethics is one of those complex topics that I don’t think anybody has figured it out. But it’s something that will be very context- and industry-specific.”

Organizations typically address ethics in one of two ways, Ammanath says. Some companies create an ethics advisory board composed of internal and external experts from academia and industry to hash out their AI ethics plans. Others appoint a chief ethics officer to drive the discussion from the C-suite. HPE advocates the former approach, she said.

While Fortune 500 companies are talking about the ethics of AI, it’s still very early days for taking any kind of action, Ammanath said.

“We’re still very early on,” she said. “Fundamentally we need to drive more education. We need to be able to educate our policy makers, our domain experts. How many healthcare AI startups are out there that don’t have anybody with a medical background? But it’s a healthcare AI product that’s being put out there. It’s raising that level of awareness in the next five years.”

One of the big challenges facing AI ethics is algorithmic transparency, the very snag the Australian government ran into with its welfare compliance system. According to Ammanath, it may turn out that full transparency is incompatible with the latest AI techniques.

“There are quite a few startups working on transparent AI. But given the nature of neural nets and how that whole thing works, I don’t know how transparent you can make it,” she said. “We probably won’t be able to leverage the technique if we make it more transparent.”
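To make that tension concrete, here is a minimal sketch of permutation importance, one common post-hoc technique for peering at a black-box model; it is not drawn from any particular startup’s product, and the model and data below are hypothetical stand-ins. Rather than opening the network, it measures how much accuracy drops when each input feature is scrambled.

    # A minimal sketch of post-hoc explanation via permutation importance;
    # the model, data, and feature count are hypothetical stand-ins.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = make_classification(n_samples=2000, n_features=6,
                               n_informative=3, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                          random_state=0).fit(X_train, y_train)
    baseline = model.score(X_test, y_test)

    rng = np.random.default_rng(0)
    for j in range(X_test.shape[1]):
        X_perm = X_test.copy()
        rng.shuffle(X_perm[:, j])  # destroy feature j's information
        drop = baseline - model.score(X_perm, y_test)
        print(f"feature {j}: accuracy drop {drop:.3f}")

The accuracy drops rank features by influence, but they say nothing about how the network combines those features internally, which is the deeper transparency Ammanath is doubtful about.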

Data Horses and Ethical Carts

Eventually a common ethics framework will emerge, and it could even lead to new laws, experts agree. But without a national data regulation, which the United States has so far failed to enact, it’s all moot.

(Pe3k/Shutterstock)

“I think the hype around AI is going to help us drive better data regulations, because data by itself seems so harmless that it’s not getting the kind of attention it needed,” Ammanath said. “With AI, just because there’s so much hype, I think it will drive what we need in the data space first. As you said, it’s fundamental.”

Deloitte’s Katyal agreed that AI regulations could be in the offing, but maintained that we’re still years away. “I think there is a desire by many to ask for some level of parameters or guidelines, whether it’s a law or rule of some kind,” he said.

However, a standard data regulation must come before any AI regulation, he said. “I would see more things happening around the data governance side of things,” he said.

Katyal advises companies to get their arms around their data infrastructure and begin establishing governance and control over it. Companies should also be able to demonstrate full control over their algorithms, whether or not any law requires it.

“This is good practice,” he said. “While there may not be regulatory push to do it, it’s the right thing to do. It’s a reputational risk, so you have to do it.”

Related Items:

AI Ethics and Data Governance: A Virtuous Cycle

Giving DevOps Teeth To Crunch Down on AI Ethics Governance

EU Ethics Rules Seek to Balance AI Risks, Benefits
