June 9, 2020

IBM To Stop Selling Facial Recognition Technology


In a nod to the national discussion on racial equality and law enforcement, IBM CEO Arvind Krishna yesterday announced that IBM will no longer sell facial recognition technology. He also called for greater transparency in use of body cameras worn by police and warned about the dangers of bias in AI systems used by law enforcement.

Krishna made his surprise announcement in a letter addressed to two U.S. senators and three members of the House of Representatives. A copy of the letter was posted to the IBM THINKPolicy blog.

“IBM no longer offers general purpose IBM facial recognition or analysis software,” Krishna wrote. “IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency. We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.”

Facial recognition has already been banned in some cities, including San Francisco and Oakland, California. In 2019, the California Legislature passed, and Governor Gavin Newsom signed, the Body Camera Accountability Act, which bans facial recognition technology from being used on body cameras worn by police. A similar measure banning facial recognition on images from police body cams was tucked into the police reform bill introduced in the House of Representatives on Monday.

Modern facial recognition technology uses a deep learning approach that leverages convolutional neural networks (CNNs) to detect minute patterns in images. When trained on certain standardized datasets, such as passport images, this sort of facial recognition technology can be very accurate. In fact, according to the National Institute of Standards and Technology’s (NIST) Face Recognition Vendor Test (FRVT), the top scores exceed 99.5% accuracy.
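To make the embed-and-compare approach concrete, here is a minimal sketch using the open source face_recognition library (which wraps a dlib CNN); the image file names are hypothetical, and real deployments involve far more engineering than this:

```python
import face_recognition

# Load a reference photo and a query photo (hypothetical file names).
known = face_recognition.load_image_file("passport_photo.jpg")
query = face_recognition.load_image_file("camera_frame.jpg")

# A CNN maps each detected face to a 128-dimensional embedding vector.
known_encoding = face_recognition.face_encodings(known)[0]
query_encoding = face_recognition.face_encodings(query)[0]

# Two faces are declared a match when the distance between their
# embeddings falls below a tuned threshold (0.6 is the library default).
distance = face_recognition.face_distance([known_encoding], query_encoding)[0]
print(f"distance={distance:.3f}  match={distance < 0.6}")
```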

Deep neural nets have improved the accuracy of facial recognition (Evannovostro/Shutterstock)

However, facial recognition technology is only as good as the dataset used to train it. And because many of these datasets lack ethnic diversity, the technology performs worse when applied to people who don’t resemble those in the training set.

In some cases, the accuracy of facial recognition technology on darker-skinned individuals can be significantly lower than it is for lighter-skinned individuals, who tend to be overrepresented in training datasets. According to NIST, false positives for Asian and black faces were up to 100 times more likely than for white faces. That finding led the Department of Homeland Security to call for a temporary moratorium on the use of the technology in January.
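The disparity NIST measured is, at bottom, a straightforward calculation: run the matcher on labeled pairs of images and tally false positives separately for each demographic group. A toy illustration in Python, with entirely made-up records, might look like this:

```python
from collections import defaultdict

# Hypothetical evaluation records: face pairs, the demographic group
# they belong to, and whether the system declared a match.
trials = [
    {"group": "A", "same_person": False, "predicted_match": True},
    {"group": "A", "same_person": False, "predicted_match": False},
    {"group": "B", "same_person": False, "predicted_match": False},
    {"group": "B", "same_person": False, "predicted_match": False},
    # ...a real test like FRVT uses millions of labeled pairs per group
]

false_pos, totals = defaultdict(int), defaultdict(int)
for t in trials:
    if not t["same_person"]:  # only non-matching pairs can yield false positives
        totals[t["group"]] += 1
        false_pos[t["group"]] += int(t["predicted_match"])

for group in totals:
    print(group, false_pos[group] / totals[group])  # false match rate per group
```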

In 2018, a graduate researcher in the MIT Media Lab named Joy Buolamwini released the results of an experiment that found error rates for gender classification algorithms exceeded 34% for darker-skinned women, while the maximum error rate for lighter-skinned males was less than 1%. The study tested facial analysis technology from Microsoft, IBM, and China’s Face++.

The debate over facial recognition in law enforcement reached a peak earlier this year, when the New York Times revealed that a company named Clearview AI had amassed a searchable database of more than 3 billion pictures of people’s faces by scraping the Internet and social media sites like Facebook, Twitter, LinkedIn, and Google’s YouTube. When those companies sent cease-and-desist letters to Clearview AI, the company, which offers its services to law enforcement organizations, claimed its practice of scraping the Internet was protected by the First Amendment.

Use of facial recognition on images taken from police body cams has come under scrutiny (John-Gomez/Shutterstock)

The debate over the use of facial recognition in law enforcement has been reawakened in the aftermath of the death of George Floyd at the hands of a Minneapolis police officer on May 25, 2020. Calls to “defund the police” and de-militarize law enforcement are gaining steam, and the use of body cams and facial recognition is part of that discussion.

“Artificial Intelligence is a powerful tool that can help law enforcement keep citizens safe,” IBM’s Krishna wrote. “But vendors and users of Al systems have a shared responsibility to ensure that Al is tested for bias, particularity [sic] when used in law enforcement, and that such bias testing is audited and reported.”

It’s unclear what impact IBM’s exit from the facial recognition software business will have on the industry as a whole. Advanced CNN-based facial recognition applications remain readily available on the public clouds.

For example, Amazon Web Services offers Rekognition, a suite of tools for identifying objects, people, text, scenes, activities, and faces in video and still images. Microsoft Azure offers Face, which customers can use to train algorithms on up to 1 million images of faces that they supply. Google Cloud offers its Vision API, which can be used to train a deep learning system to detect people’s faces.
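As an illustration of how accessible these cloud services are, the following sketch calls Rekognition’s face-comparison API through AWS’s boto3 SDK; the bucket and image names are hypothetical, and this is not code from any of the vendors discussed here:

```python
import boto3

client = boto3.client("rekognition")

# Compare one reference face against every face found in a target image;
# Rekognition returns a similarity score for each candidate match.
response = client.compare_faces(
    SourceImage={"S3Object": {"Bucket": "my-bucket", "Name": "reference.jpg"}},
    TargetImage={"S3Object": {"Bucket": "my-bucket", "Name": "crowd.jpg"}},
    SimilarityThreshold=90,
)

for match in response["FaceMatches"]:
    print(f"Similarity: {match['Similarity']:.1f}%")
```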

(On June 10, Amazon announced that it was banning law enforcement from using Rekognition for one year, and Microsoft followed suit on June 11.)

Enforcing a ban on facial recognition technology could be problematic. For example, would only the latest facial recognition technology based on CNNs be banned, or would it extend to older (and less accurate) forms of the technology? Would such a ban prevent the technology from being used to identify missing children?

According to Forrester analyst Kjell Carlsson, the facial recognition genie is out of the bottle, and it will be extremely difficult, if not impossible, to put it back in.

Arvind Krishna was appointed CEO of IBM in April 2020

“As a technology evangelist, I’m worried about it from the point of view of, Hey you’re going to stop all these folks from innovating and doing good things with these technologies as well as bad,” Carlsson told Datanami earlier this year. “But really the bigger concern for me is that, actually you’re not going to do any good in terms of preventing people from doing bad things with this.”

Instead of instituting a ban that would drive facial recognition technology underground and empower potentially bad actors, a better approach would be to encourage ethical and responsible use of AI, which would keep big vendors with significant reputational risk in the game, Carlsson said.

Carlsson specifically cited Google’s approach to facial recognition technology as a good potential model. Google has been gun-shy about providing facial recognition technology since a 2015 episode in which its Photos product misidentified black people as gorillas. Since then, it hasn’t offered shrink-wrapped facial recognition services, but it continues to offer facial recognition tools that developers can use.

“I think [Google] could do facial recognition better than others because so many of the unethical uses of AI are not intentional — it’s just people are not doing it well,” Carlsson said in March. It takes a certain amount of experience and understanding to build an ethical facial recognition system, and there are a lot of variables to consider, according to Carlsson, including how the model is architected, what training data is used, whether the chosen use case is suitable for the product, and how any disparities or challenges are mitigated.

Facial recognition is such a sensitive subject that Google has devoted a lengthy webpage to explaining its approach. “We’ve seen how useful the spectrum of face-related technologies can be for people and for society overall. It can make products safer and more secure…It can also be used for tremendous social good,” the company writes. “But it’s important to develop these technologies the right way.”

Related Items:

Weighing the Impact of a Facial Recognition Ban

Facial Recognition in the Ethical Crosshairs

AI Ethics Still In Its Infancy

Editor’s note: This story was updated to reflect AWS’ June 10 announcement that it is banning law enforcement from using Rekognition for one year.  It was updated a second time to reflect Microsoft’s June 11 announcement that it, too, was banning law enforcement from using its facial recognition tech.
