
IBM has shut down its facial recognition business, CEO Arvind Krishna announced, citing concerns that the technology can exacerbate racial profiling and enable excessive surveillance by law enforcement.

The technology played no role in George Floyd’s killing, but it has become a flash point in the heated debate over racial justice in America, raising concerns about tech companies that provide police officers with digital surveillance tools. 

In the past year, Amazon and Clearview AI—whose facial recognition app was the subject of a New York Times investigation and a recent lawsuit in Illinois—have drawn criticism for their facial recognition products.


Research has shown that the technology may have an unfair impact on communities of color because it tends to be less accurate at identifying the faces of people with darker skin, particularly women.

A 2018 paper (PDF) by Massachusetts Institute of Technology researcher Joy Buolamwini and Microsoft researcher Timnit Gebru found that IBM facial recognition software had an error rate of almost 35 percent when identifying darker-skinned females, compared with an error rate of less than 1 percent for lighter-skinned males. A follow-up study (PDF) from the Department of Commerce’s National Institute of Standards and Technology confirmed the findings.
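To see how audits like this arrive at such numbers, here is a minimal sketch in Python of computing a per-subgroup error rate from labeled predictions. The records and group names below are hypothetical stand-ins, not data from the study itself; the point is only that error is tallied separately for each demographic group and the rates are then compared.

```python
from collections import defaultdict

def error_rate_by_group(records):
    """Compute the classification error rate for each demographic subgroup.

    `records` is a list of (group, true_label, predicted_label) tuples.
    Returns a dict mapping each group to errors / total for that group.
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, true_label, predicted in records:
        totals[group] += 1
        if predicted != true_label:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Hypothetical gender-classifier outputs, for illustration only.
records = [
    ("darker-skinned female", "female", "male"),
    ("darker-skinned female", "female", "female"),
    ("darker-skinned female", "female", "male"),
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned male", "male", "male"),
]

for group, rate in error_rate_by_group(records).items():
    print(f"{group}: {rate:.0%} error rate")
```

A gap between the resulting rates, like the roughly 35-point spread the researchers reported, is what indicates that a system performs unevenly across groups.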

“IBM no longer offers general purpose IBM facial recognition or analysis software,” Krishna announced Monday night in a public letter calling on Congress to pursue policies that promote justice and racial equity. “We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.”

An IBM spokesperson told Consumer Reports that while the company will no longer develop, update, or sell facial recognition products, it may continue to work on visual detection software that recognizes objects but not people.

In a statement, Buolamwini commended IBM for leading the tech industry in pulling out of the facial recognition business. “We encourage other tech companies to follow suit,” she wrote.

The Regulation Debate

In an essay published last week, Buolamwini cited the many agencies—local, state, and federal—that “are deploying a wide range of surveillance technologies to collect, share, and analyze information about protesters.” The list included the Drug Enforcement Administration, Customs and Border Protection, Immigration and Customs Enforcement, and various U.S. military forces.

There are few laws that protect privacy in the U.S., and even fewer that specifically address facial recognition. Congressional Democrats introduced a bill Monday called the Justice in Policing Act, which includes a provision that would ban police use of facial recognition software to scan body camera footage without a warrant.

The Illinois Biometric Information Privacy Act has led to a number of high-profile lawsuits in recent years, including a case against Facebook’s facial recognition technology that generated a $550 million settlement in January.

In March, Washington state passed legislation that curbs law enforcement use of the technology. Microsoft, which had lobbied in support of the law, celebrated its passage as a victory for consumers, though many privacy experts say the law doesn’t go far enough.

Even where state and local governments have taken up facial recognition regulations, most limit protections to the use of the technology by law enforcement. That leaves the vast majority of American consumers with little recourse if they aren’t comfortable with having their faces tracked in Facebook photos or in retail experiments.

IBM’s facial recognition business trailed behind that of its competitors, and some argue that the decision to sever the company’s ties to the technology is little more than a smart public relations move.

“While questions can be raised about how much financial sacrifice IBM is really making, we shouldn’t lose sight of a clear, positive outcome: increasing the pressure on others to do better right away,” says Evan Selinger, a senior fellow at the Future of Privacy Forum, who studies facial recognition as a philosophy professor at the Rochester Institute of Technology.

The Big Picture

Despite IBM’s exit and the heightened scrutiny, facial recognition remains a thriving business.

Amazon’s Rekognition software is used by state and local police departments as well as federal agencies such as ICE. Clearview AI recently pivoted to work exclusively with government entities.

Microsoft’s stance is more complicated. The company has refused to sell its technology for law enforcement purposes on at least one occasion, and it’s a vocal proponent of facial recognition regulation. Microsoft does, however, sell other technologies and services to law enforcement. A group of 250 Microsoft employees reportedly signed an internal letter a few days ago calling on the company to end its police contracts and endorse the recent protests.

Like Microsoft, Amazon has endorsed facial recognition regulation, though with more reservations. Facial recognition is “a perfect example of something that has really positive uses, so you don’t want to put the brakes on it,” CEO Jeff Bezos told reporters at an event last September. “At the same time, there’s lots of potential for abuses with that kind of technology, and so you do want regulations.”

Amazon and Clearview AI did not return CR’s requests for comment. Microsoft did not provide details about any planned changes to its business practices, but a spokesperson highlighted blog posts about the company’s position on facial recognition.

“Although IBM’s move is a good one, private companies are still really the ones deciding how this tech will be used,” says Katie McInnis, a policy counsel at Consumer Reports. “These decisions are also based on companies’ own definitions of what violates ‘basic human rights and freedoms,’ something that the tech industry has not been great at in the past.”

“I’m grateful that companies are looking at this issue, but biometrics is a dangerous class of technology, and it needs guardrails,” says Pam Dixon, executive director of the advocacy group World Privacy Forum. “We need to have rules about technologies that perform differently for people of different skin tones.”