Imagine coming home from a day at work only to find the police waiting to arrest you in front of your wife and young children on charges of felony theft at a store where you never shop.

That’s the Kafkaesque situation Robert Williams faced in early 2020 when facial recognition software used by Detroit police matched his old driver’s license photo to dark, grainy security footage from a year-old shoplifting incident at a Shinola watch store. The charges were quickly dismissed, and the police chief later apologized for what he acknowledged was a mistake.

The ubiquity of security cameras in stores, coupled with the adoption of less-than-reliable facial recognition technology for law enforcement, means that serious mistakes such as this could become an increasing concern for consumers. 

“A lot of people treat facial recognition and other AI [artificial intelligence] applications as magic and infallible black boxes,” says Justin Brookman, director of privacy and technology policy at Consumer Reports. “The technologies seem so sophisticated, but in reality, they’re far from completely accurate, and the results of a mistake can sometimes be devastating.”

How Facial Recognition Works

Facial recognition software, which matches a photograph against image data stored in a database, is widely used in law enforcement. The systems are sold by private companies that market their benefits to law enforcement as well as to retailers and other businesses. Consumers may know the technology from applications like Facebook, where it can help you tag friends in the photos you post.

Facial recognition software can work well, but in many situations it can also be very prone to error. Anil Jain, a professor in the computer science department at Michigan State University in East Lansing, explains that the technology is only as good as the images it’s fed.

Images taken in a controlled environment—like a driver’s license photo, where the subject is posing, face forward, close to the camera, and in good lighting—can allow for a reliable match.

But the images taken by a store’s security cameras are often far less reliable. A person of interest may be wearing a hat, looking away from the camera, positioned at a distance, and doing all of that in dim or uneven light. Those factors were all at play in the security camera footage that led to the arrest of Robert Williams, who this week filed a lawsuit (PDF) against Detroit police and the city. In short, subpar likenesses create a garbage-in, garbage-out situation that causes errors.

“The accuracy of the system is only as good as the worse of the two images,” Jain says.
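To make that garbage-in, garbage-out point concrete, here is a minimal Python sketch of the comparison step at the heart of these systems. It is illustrative only: real products use proprietary neural networks to turn a face into a numeric “embedding,” and the MATCH_THRESHOLD value here is an invented stand-in for the cutoffs vendors tune per deployment. Image degradation (distance, dim light, odd angles) is simulated as random noise added to the embedding.

    import numpy as np

    rng = np.random.default_rng(0)

    def cosine_similarity(a, b):
        # Cosine similarity between two face embeddings (1.0 = identical direction).
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Stand-in for the embedding a face recognition model would produce
    # from a clean, controlled image such as a driver's license photo.
    license_photo = rng.normal(size=128)

    MATCH_THRESHOLD = 0.8  # invented for illustration; vendors tune real cutoffs

    # Simulate probe images of the SAME person at increasing degradation
    # (distance, dim light, odd angle) as additive noise on the embedding.
    for noise_level in (0.1, 0.5, 1.0, 2.0):
        probe = license_photo + rng.normal(scale=noise_level, size=128)
        score = cosine_similarity(license_photo, probe)
        verdict = "match" if score >= MATCH_THRESHOLD else "no match"
        print(f"noise={noise_level:.1f}  similarity={score:.2f}  ->  {verdict}")

As the noise grows, the similarity score falls and the true match is missed; conversely, if an operator loosens the threshold to catch noisy probes, the system starts matching the wrong people. That trade-off is exactly what a dark, grainy security still forces on investigators.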

Less Accurate With Black People

Additionally, many facial recognition systems are especially unreliable when it comes to identifying people of color like Robert Williams.

A series of studies led by Joy Buolamwini, a computer scientist and founder of the Algorithmic Justice League, a digital advocacy group based in Cambridge, Mass., demonstrated that facial recognition technology is significantly less accurate when identifying women and people of color. In a 2019 study by the American Civil Liberties Union, Amazon’s Rekognition software incorrectly matched 27 Black professional athletes to mugshots in a criminal database. Other studies have reached similar conclusions.

One reason is that the datasets used for training facial recognition software tend to be heavily skewed toward images of white people. The result is that Black people are at particular risk of being wrongly identified from surveillance video of crimes. The chief prosecutor of Detroit’s Wayne County, Kym Worthy, said in a statement about the case that she had resisted the adoption of facial recognition technology, citing “studies regarding the unreliability of the software, especially as it relates to people of color.”
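One way researchers quantify such disparities is to run a system against a labeled benchmark and compare error rates across demographic groups, as NIST has done in its vendor tests. The Python sketch below shows a hypothetical version of that bookkeeping; the records are invented for illustration, not real audit data.

    from collections import defaultdict

    # Hypothetical audit records, invented for illustration: each probe's
    # demographic group, whether the system reported a match, and whether
    # that reported match pointed at the right person.
    results = [
        {"group": "group_a", "matched": True,  "correct": True},
        {"group": "group_a", "matched": True,  "correct": True},
        {"group": "group_a", "matched": True,  "correct": False},
        {"group": "group_b", "matched": True,  "correct": False},
        {"group": "group_b", "matched": True,  "correct": True},
        {"group": "group_b", "matched": False, "correct": False},
    ]

    reported = defaultdict(int)  # matches the system reported, per group
    wrong = defaultdict(int)     # of those, how many hit the wrong person

    for r in results:
        if r["matched"]:
            reported[r["group"]] += 1
            if not r["correct"]:
                wrong[r["group"]] += 1

    for group in sorted(reported):
        rate = wrong[group] / reported[group]
        print(f"{group}: false-match rate {rate:.0%} ({wrong[group]}/{reported[group]})")

If one group’s false-match rate comes back consistently higher, that is the kind of disparity the Buolamwini studies and the ACLU test documented in deployed systems.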

Regardless of who is misidentified, the problem seems likely to grow. A 2019 report suggests there are 70 million security cameras installed in the U.S., the vast majority of them in retail locations. Footage that once just sat on a hard drive takes on new risks when facial recognition can be run against it, according to Jay Stanley, a senior policy analyst with the ACLU. “With facial recognition, the camera’s eyes are no longer dumb,” he says.

It is now much easier to use surveillance video to quickly home in on a suspect. Police no longer have to flip through mug books to find someone who resembles someone in a CCTV video. They simply run the security footage through an automated facial recognition database. 
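Conceptually, that search is a nearest-neighbor lookup: the probe face from the footage is compared against every enrolled face, and the most similar entries come back as “candidates.” The Python sketch below uses random vectors as a stand-in for a real gallery; the point it illustrates is that the search always returns somebody, whether or not the true subject is enrolled, which is why a candidate list is supposed to be treated as an investigative lead rather than an identification.

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy "gallery" of enrolled faces (license photos, mug shots) as random
    # unit vectors; real galleries hold millions of genuine embeddings.
    gallery = rng.normal(size=(1000, 128))
    gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)
    names = [f"entry_{i:04d}" for i in range(1000)]

    # A probe embedding computed from security footage.
    probe = rng.normal(size=128)
    probe /= np.linalg.norm(probe)

    # Cosine similarity against every enrolled face, then take the top five.
    # Note: this ALWAYS returns candidates, even when the true subject was
    # never enrolled -- someone lands at the top of the list regardless.
    scores = gallery @ probe
    top5 = np.argsort(scores)[::-1][:5]

    for rank, idx in enumerate(top5, start=1):
        print(f"{rank}. {names[idx]}  similarity={scores[idx]:.2f}")

When a detective trusts that top-ranked candidate without independent corroboration, the automation bias described later in this article takes over.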

And it’s very likely that your face, like Robert Williams’, is in one of those databases. According to a 2016 study by Georgetown Law, more than 117 million Americans were in some kind of law enforcement facial recognition database. That was roughly half of all adults in the U.S., and privacy experts agree that the number is far larger now. Some facial recognition databases contain billions of images. 

While those databases include mug shots, consumers who never had any run-ins with the law—Robert Williams had no criminal record—can end up in this virtual lineup without knowing it. The databases include driver’s license and state ID photos, along with images posted on social media and elsewhere online. 

What's Next for Facial Recognition

Facial recognition technology works pretty well for tagging photos in social media—and when it doesn’t, the harms are contained. But things are different when it comes to law enforcement.

The Detroit Police Department has safeguards in place that could have kept Robert Williams from being arrested. But, according to both the lawsuit and the city’s statement, detectives placed so much faith in the technology—social scientists call this “automation bias”—that they bypassed the department’s procedures. A detective obtained a warrant for Williams’ arrest based on identification by a security guard who wasn’t in the store when the crime occurred. 

There has been pushback against the use of facial recognition by police. San Francisco and Somerville, Mass., are among several cities that have placed restrictions or outright bans on the use of facial recognition software by law enforcement, and at least 26 states have considered legislation that would do the same. 

Additionally, some of the country’s best-known technology firms have slowed their rollout of facial recognition services. “A lot of smart people have called for a moratorium on the use of facial recognition until the accuracy and disparate impact issues are worked out,” says CR’s Brookman. “Amazon, for instance, agreed to a one-year moratorium on police use of their facial recognition technologies a year ago—but that one year is just about up, and it’s not like all the problems have been fixed.”

Healthy skepticism of the tech could become even more important in the future as facial recognition becomes cheaper and retailers increasingly install the technology on site to alert them to both VIP customers and potential shoplifters. A number of retailers in the U.S. have at least experimented with facial recognition technology for security. It’s also used in casinos and at certain public events, here and overseas.

“We don’t know how many stores are using facial recognition and how they’re using it,” the ACLU’s Stanley says. “Retailers want to watch their customers, while making them feel like they’re not being watched.”