Should facial recognition technology be banned? That question is getting renewed interest in the wake of revelations about Clearview AI’s use and potential abuse of the technology. But the debate over banning facial recognition is not as clear cut as it may appear—and banning the tech could make the situation worse, a technology expert tells Datanami.
There are risks associated with facial recognition, just as there are risks with the unrestrained or unethical use of any technology. But in the battle over the privacy rights of individuals, nothing strikes fear into the hearts of people quite like the specter of their faces being used without their consent and the sense of permanent loss of anonymity that comes with it.
The distress was palpable following the New York Times’ January 8 story about how a little-known startup named Clearview AI has amassed a searchable database of more than 3 billion images by scraping publicly available imagery data on the Web.
Clearview AI ostensibly is designed to enable law enforcement to identify individuals by leveraging the vast trove of facial imagery available on Facebook, Google, Twitter, LinkedIn, and other sources. Essentially, if the police have an image of somebody’s face, they can leverage Clearview AI’s large database of imagery data and its neural network facial recognition system to put a name to that face.
There is little doubt the technology works. The Times reported how the Indiana State Police had a picture of a suspect in a crime but came up empty using traditional sources of facial recognition, including databases of driver's license photos and booking photos.
Clearview AI CEO Hoan Ton-That tells CBS News scraping the Web for publicly available information is a constitutional right protected by the First Amendment (Image courtesy CBS News)
But thanks to Clearview AI's capability to essentially scour the entire Web for matches, the Indiana State Police found the man's identity in just 20 minutes, and the agency reportedly became Clearview AI's first paying customer. The company now claims to have 600 paying customers.
A Clearview Backlash
However, there are questions over who actually has access to Clearview AI's technology. Companies that aren't law enforcement organizations, such as Walmart, Macy's, and the NBA, have appeared on the company's list of clients, according to TechCrunch. And then there's the Times' report of how a New York billionaire used the technology to find the identity of the man his daughter was dating (it turned out to be a San Francisco venture capitalist).
The backlash against Clearview AI was swift. Tech giants like Facebook and Google sent cease-and-desist letters to the company, and the attorney general of New Jersey called for a moratorium on the use of Clearview AI in his state.
The Department of Homeland Security also called for a temporary moratorium on use of the technology, saying the potential for abuse is just too high. Among the reasons DHS cited for halting the use of facial recognition is a recent study from the National Institute of Standards and Technology that concluded false positives were up to 100 times more likely for Asian and black faces than for white faces.
Bans on facial recognition technology are already on the books for local governments, including San Francisco and Oakland in California and Somerville, Massachusetts. Vermont Senator Bernie Sanders has also made banning facial recognition a part of his platform to be the Democratic candidate for president. (The fact that one of Clearview AI's co-founders, Richard Schwartz, was an advisor to Rudy Giuliani, and that the company received $200,000 in seed funding from Palantir co-founder Peter Thiel, has also thrust the company into the political sphere.)
The Trouble with Bans
Clearly, there are legitimate concerns about how facial recognition technology should be used in open societies around the world. For various reasons, an outright ban on the technology is probably not feasible, according to Forrester analyst Kjell Carlsson.
Facial recognition technology is under fire (ImageFlow/Shutterstock)
For starters, how would one differentiate between the traditional facial recognition technology used for the past 20 years and the newer, more accurate AI methods based on convolutional neural networks? "You'll have a hell of a time going in and trying to split out what we've already allowed them to do and they've been doing for decades, relative to what they can do going forward," Carlsson says.
There's also a sense of throwing the baby out with the bathwater, Carlsson says. "There's a big difference in use cases between using facial recognition for helping identify missing children relative to using facial recognition for potentially identifying a suspect versus using facial recognition for scouring the Web and finding out arbitrary pieces of information about anyone," he says.
In Carlsson’s view, it would be pointless to pretend that the technology doesn’t exist. The facial recognition genie is out of the bottle, so to speak, and it’s not going back in. A better approach, he says, would be to establish guidelines that describe and define what is ethical and legal use of facial recognition—or any form of artificial intelligence, for that matter—and what is not ethical and legal.
“As a technology evangelist, I’m worried about it from the point of view of, Hey you’re going to stop all these folks from innovating and doing good things with these technologies as well as bad,” he says. “But really the bigger concern for me is that, actually you’re not going to do any good in terms of preventing people from doing bad things with this. Arguably, you’re going to enable folks to do just as many bad things as they were doing before, and prevent folks who would probably do a better job of it.”
The Importance of Reputational Risk
Carlsson wishes that a capable tech firm like Google would take the lead on facial recognition. Of course, Google’s reputation took a big hit in 2015 when it released facial recognition technology that subsequently was found to misidentify black people as gorillas. The company immediately took the technology offline and hasn’t offered it again since, except for a product that identifies celebrities on the Web, but it has tight guardrails around that, Carlsson says.
"I think [Google] could do facial recognition better than others because so many of the unethical uses of AI are not intentional–it's just people are not doing it well," Carlsson tells Datanami. "They're not testing what they're doing. They're not investigating to understand whether the training data is representative of the data that the individuals it's going to be used on. They're not going in and taking a look at the underlying architecture of the models and whether or not it's suitable to the use. Or they just don't have experience with all of the different things that could go wrong and are mitigating that."
The State of California is banning the use of facial recognition with police body cam footage (Image source: Axon)
Google would also put its considerable reputation on the line with something like facial recognition, which provides another counter against abuse (intentional or not). The 2015 misstep shows that Google was responsive to ethics concerns, but it also likely makes the company a bit gun shy with the politically volatile technology.
At the other end of the spectrum is a company like Clearview AI, which has little reputation to risk and is thus free to work with the technology with fewer encumbrances.
“So far they’re the worst of the different vendors I’ve seen in the security space in terms of an approach that is arguably unethical for what it’s being used for,” Carlsson says. “It is vulnerable from a data point of view. It is bad from the point of view of traditional data science metrics in terms of performance. So it sort of hit the trifecta of everything you don’t want to see.”
Human Decisions, Not Technology
Every technology has the potential to be abused, and some clearly should not be in the public realm. There’s little to gain from enabling thermonuclear devices at the household level, for example.
Trying to ban facial recognition technology would backfire, Carlsson says, because it would encourage more companies like Clearview AI while hamstringing the ability of reputable firms like Google to pursue ethical uses of the technology.
Facial recognition tech is under fire (Ivan-Marc/Shutterstock)
For example, the company Axon is one of the best-known manufacturers of body cameras worn by law enforcement officers. However, because of the controversy over facial recognition–not to mention the State of California's ban on facial recognition with police body cam footage–the company has stated that it will not provide the technology with its camera platforms. That's just an invitation for trouble, Carlsson says.
“We know that all these body cams are going to be used with facial recognition models, but they’re not going to come from Axon. They’re going to come from a third-party vendor. They’re certainly not coming from Google now,” he says. “As more and more companies are going to stay away from this because of the reputational risk, and instead, the sum of this is you get more folks like Clearview who can get away with doing something on this because nobody knows them, and they don’t have any reputational risk.”
When you consider the potential upside of facial recognition, and the likelihood that a ban would drive the technology underground and open the door to unethical players, the better option is to encourage ethical and legal uses of the technology, Carlsson says.
“At the end of the day you have to make an ethical choice whether it should or shouldn’t be done. That’s going to be a human choice. It’s not a technology choice,” Carlsson says. “It’s better to draw the boundaries on a use case basis, not necessarily on a data and a technology basis.”