Defund Facial Recognition

I’m a second-generation Black activist, and I’m tired of being spied on by the police.

[Illustration by Adam Maida: two abstract faces overlaid by the spiderweb-like nodes of a facial-recognition system.]

Ahmaud Arbery. Breonna Taylor. Tony McDade. George Floyd. Rayshard Brooks. Oluwatoyin Salau. Robert Forbes. As each story has emerged of a Black life violently ended, whether by law enforcement, by white nationalists, or by other interpersonal violence, a multiracial movement for Black lives, led by Black activists, has kept pace. So have the disturbing and highly advanced police technologies used to spy on these activists. My mother survived the surveillance of the FBI’s counterintelligence program as a civil-rights activist in the 1960s. As a second-generation Black activist, I’m tired of being spied on by the police.

In June, in the midst of a mushrooming protest movement against increasingly visible police killings of Black people and a simultaneously exploding coronavirus pandemic that is taking Black lives at a disproportionate rate, IBM made the surprising announcement that it would stop selling, researching, or developing facial-recognition services. Amazon and Microsoft followed with their own announcements that they would not sell facial-recognition services or products to state and local police departments, pending federal regulation. As activists have paired the long-standing demand to defund police with a newer call for technology companies to cut ties with law-enforcement agencies, facial-recognition companies face a come-to-Jesus moment of their own. But the companies that are deciding not to sell these controversial products as a powerful protest movement gains traction may be motivated more by a careful calculation of financial and public-relations risks than by concern for Black lives.

Black people in the U.S. are killed by police at more than twice the rate of white Americans, and in Minneapolis, where in May George Floyd was killed by police, officers are seven times more likely to use force against Black people than against white people. But in the 21st century, police violence is not limited to the overtly physical kind. Although we may never know its full extent, there is real evidence that covert, high-tech surveillance of Black activists and journalists helps drive brutal policing.

In 2015, facial-recognition technology was used to track and arrest Baltimore protesters reacting to the police murder of Freddie Gray, the young Black man who died in police custody from spinal injuries for which no one was held responsible. In the past few weeks, Homeland Security has spied on protesters in 15 cities using drone surveillance, while police body cameras equipped with facial-recognition technology have captured images of protesters. The comedian John Oliver has raised concerns that unchecked facial recognition is now one of policing’s most powerful tools.

Automated facial-recognition software is rooted in discredited pseudoscience and racist eugenics theories that claimed to assess mental capacity and character from facial structure and head shape. Today’s software uses artificial intelligence, machine learning, and other forms of modern computing to capture the details of people’s faces and compare that information to existing photo databases, with the goal of identifying, verifying, categorizing, and locating people.
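To make that description concrete, the matching step at the heart of these systems can be sketched in a few lines: reduce each face image to a numeric vector (an “embedding”), then search a database of enrolled faces for the closest match above some similarity threshold. The Python sketch below is a toy illustration only, not any vendor’s pipeline; the `embed_face` stub, the 128-dimension embedding size, and the 0.6 threshold are all assumptions standing in for a trained neural network and its tuned parameters.

```python
from __future__ import annotations

import numpy as np

# Typical embedding size for face-recognition models; an assumption here.
EMBEDDING_DIM = 128

def embed_face(image: np.ndarray) -> np.ndarray:
    """Stand-in for the deep neural network that maps a face image to a
    vector ("embedding"). Real systems learn this mapping from data; this
    stub derives a repeatable pseudo-random vector from the pixels so the
    example runs end to end."""
    seed = abs(hash(image.tobytes())) % (2**32)
    vec = np.random.default_rng(seed).normal(size=EMBEDDING_DIM)
    return vec / np.linalg.norm(vec)  # unit length, so dot product = cosine

def identify(probe_image: np.ndarray, database: dict[str, np.ndarray],
             threshold: float = 0.6) -> str | None:
    """Return the name of the closest enrolled face, or None if nothing
    clears the similarity threshold (the threshold value is an assumption;
    real deployments tune it)."""
    probe = embed_face(probe_image)
    best_name = max(database, key=lambda name: probe @ database[name])
    return best_name if probe @ database[best_name] >= threshold else None

# Toy "enrollment": map identities to stored embeddings of their photos.
photos = {name: np.random.rand(64, 64) for name in ("person_a", "person_b")}
database = {name: embed_face(img) for name, img in photos.items()}

print(identify(photos["person_a"], database))       # "person_a": exact re-match
print(identify(np.random.rand(64, 64), database))   # almost certainly None
```

Every real deployment must choose that threshold, and the error rates on either side of it, false matches above all, are exactly where the racial disparities documented in the studies cited below surface.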

While law-enforcement agencies specifically use the technology to monitor perceived threats and predict criminal behavior, the capabilities of facial recognition are far more extensive. The software can monitor your body through a combination of biometrics (measurements of physical and behavioral characteristics), anthropometrics (measurements of body morphology), and physiometrics (measurements of bodily functions such as heart rate, blood pressure, and other physical states). America has long used science and technology to categorize and differentiate people into hierarchies that, even today, determine who is able and unable, deserving and undeserving, legitimate and criminal. As with the scientific racism of old, facial recognition doesn’t simply identify threats; it creates them, and as such intensifies a dangerous digital moment with a long history.

For at least 10 years, I have been one of many racial-justice, civil-rights, and privacy advocates warning that facial-recognition and biometric technologies would be used to supercharge police abuses of power and worsen racial discrimination. Less than six months ago, Microsoft dismissed the idea of a moratorium. Amazon has at times rebuked civil-rights concerns, despite research showing that facial-recognition systems tend to misidentify people of color and women at higher rates than white people and men. In one study, Asian American and Black people were up to 100 times more likely to be misidentified than white men, and Native Americans had the highest false-positive rate of all ethnicities. Microsoft has no existing facial-recognition contracts with local police departments in the United States, but claims in its own materials to be a leader in the facial-recognition industry.

IBM has been more responsive to the call from activists, refusing outright to sell facial-recognition services because of their potential for abuse. IBM’s written announcement said the company “firmly opposes” the use of facial recognition “for mass surveillance, racial profiling, violations of basic human rights and freedoms.” CEO Arvind Krishna also called for a national conversation over whether facial recognition should be used by law enforcement at all, and the company has established an internal AI-ethics board. While this stance appears driven by genuine human-rights concerns, and may be the direct result of the fact that Krishna is the company’s first CEO of color in its more-than-100-year history, it’s important to remember that, less than a decade ago, the company was building surveillance infrastructure in the Philippines, strengthening video-monitoring capabilities that enabled human-rights violations.

According to Ruha Benjamin, the author of Race After Technology, modern invasive technologies such as facial recognition and electronic monitors reproduce and supersize racial inequality in an era of big data, and offer few tangible metrics with which to measure effectiveness. These technologies are as destructive to democracy as they are discriminatory. The national conversation about how to end excessive, brutal, and discriminatory policing is evolving, and as it claims more victories, there is a growing belief that defunding the infrastructure of policing must also mean dismantling abusive digital-surveillance infrastructure. Prohibiting police access to facial recognition and other high-tech tools used to criminalize Black communities is required to defend Black lives in the 21st century.

The scope of this technology and its influence on policing is staggering. For the past several years, facial-recognition technologies have proliferated in law enforcement like wildfire through thirsty terrain. Incredibly, half of all U.S. adults are already included in police facial-recognition databases, driving the kind of persistent monitoring a 2016 report from the Georgetown Law Center on Privacy and Technology called a “perpetual line-up.” The report also found that as many as one in four police departments across the U.S. can access facial-recognition tools, and many use them in routine criminal investigations.

Immigration and Customs Enforcement has used facial-recognition technology to mine state databases, including the huge trove of DMV records, scanning millions of people’s photos without their knowledge or consent. In Maryland, a state that grants special driver’s licenses to undocumented immigrants, ICE has used facial-recognition software to scan millions of Maryland driver’s-license photos without a warrant or any other form of state or court approval, an unprecedented and dangerous level of access. The FBI has joined the fray, conducting 4,000 facial-recognition searches a month; 21 states allow the bureau this kind of access to their driver’s-license databases.

Here’s another way to think about how influential this emerging technology already is. Clearview AI is one of the nation’s most powerful facial-recognition companies, with roots in America’s extreme political right. One of the primary investors in Clearview AI is Peter Thiel, an early investor in Facebook and a co-founder of the CIA-backed big-data start-up Palantir. Clearview AI has technology that not only allows law enforcement or private corporate subscribers, such as the NBA, Best Buy, and Macy’s, to connect faces to personal data in real time, but can also now pair with augmented-reality glasses in a terrifying innovation that could potentially identify every person a user sees. The Clearview database’s size significantly exceeds that of others in use by law enforcement, with about 3 billion photographs. The FBI’s own database, which pulls from passport and driver’s-license photos, is the second largest, with 641 million images of people’s faces. Clearview AI built its database by scraping billions of photos from social-media platforms such as Facebook and Twitter, in violation of their terms of service. Clearview’s database is widely available to law enforcement across the country, and its technology has been adopted by more than 600 law-enforcement agencies in the past year alone. In response to legal objections by some of the platforms from which it has scraped its 3 billion photos, Clearview CEO Hoan Ton-That made a rather thin legal claim that the company had a First Amendment right to the data because the images were publicly available.

One of the police departments that uses Clearview AI’s database is the Minneapolis Police Department. This is, of course, the department that sparked the recent uprisings; the (now fired) police officer Derek Chauvin has been charged with second-degree murder, after he pressed his knee into George Floyd’s neck for nearly nine minutes. As of February 2020, hundreds of searches had been conducted by the Minneapolis Police Department, the Hennepin County Sheriff’s Office, the St. Paul Police Department, and the Minnesota Fusion Center.

Scary enough for you? Well, there’s more.

This technology has also appeared in schools and airports, and within the devices we use every day. In January 2020, the Lockport City School District, in New York State, became one of the first known districts in the country to adopt facial-recognition technology on school property. In 2017, President Donald Trump signed an executive order to deploy airport facial-recognition biometrics for all international travelers, citizen and noncitizen alike, while allowing U.S. citizens to opt out. But in 2019, U.S. Customs and Border Protection sought approval for, and then withdrew, a proposal to remove the opt-out option and subject American citizens to mandatory face scans. Meanwhile, companies such as Facebook and Apple have integrated facial recognition into their platforms and devices to allow users to unlock phones and tag photos. The point is, facial recognition is widespread, and much of the resulting data is extracted by a policing system that Black communities, and many diverse communities across America, experience as unrestrained, violent, and racist.

Black faces have long been considered a threat by American law enforcement. It’s discomforting, even dystopian, to think that when I step out of my home to exercise my constitutional right to protest, I will encounter a system that seems hell-bent on ending my life. My Black face can be identified, verified, and tracked without my consent or knowledge. While announcements by IBM, Amazon, and Microsoft generated tons of press, and ostensibly responded to the rising Black-led movement against police violence, there is still cause for grave concern.

Brands that have faced accusations of racism, including Facebook, Nike, McDonald’s, and Coca-Cola, are now making all kinds of public pledges to fight for Black lives. Though protests for Black civil rights are neither a brand that can be purchased nor a fad meant to fade with time, it is not unusual for private companies to take advantage of moments of mass awareness for their own gain. The facial-recognition industry is no different.

Just days before Microsoft made its apparently magnanimous announcement to stop selling a technology it has never sold to local U.S. police departments, more than 250 Microsoft employees released a poignant letter about their personal experiences with police violence and urged the company to cut its ties with police departments, which extend far beyond the facial-recognition market. Microsoft claims that it won’t allow its facial-recognition technology to be used in any way that puts fundamental rights at risk, but more than a decade ago, the company partnered with the NYPD to create the Domain Awareness System, which forms the spine of the surveillance apparatus for a city whose officers are more than five times as likely to kill Black New Yorkers as white residents. In an alarming move for Black activists working to defund police, Microsoft even pitched DAS as a cheaper alternative to high police salaries. Meanwhile, in an email, Microsoft requested that the popular Black artist Shantell Martin make a Black Lives Matter mural in Manhattan “while the protests are still relevant.” At the height of the protests in June, the company, clearly aware of the timing, also posted powerful quotes on Twitter about the impact of systemic racism on its Black employees, who, by the way, constitute a meager 4.4 percent of Microsoft’s global workforce, including retail and warehouse workers, and less than 3 percent of its U.S. executives, directors, and managers, according to the company’s 2019 diversity and inclusion report. The dissonance between the company’s stated values and actions cannot be ignored.

Amazon has also struggled in the conflict between its rhetoric and its actions. The company has spent the past two years aggressively marketing Rekognition, its facial-recognition product, to hundreds of police departments and federal agencies, even updating it to add “fear detection.” Amazon has also heavily marketed a “smart” doorbell product called Ring, which records video footage of people who come to your door, footage that police can gain access to. Last year, Amazon claimed that it had partnered with more than 200 local U.S. law-enforcement agencies to share the locations of installed Ring cameras and to promote Ring in their local communities. Privacy leaders at the Electronic Frontier Foundation have raised concerns that the technology is simply a “high-speed digital mechanism by which people can make snap judgements about who does, and who does not, belong in their neighborhood, and summon police to confront them.” As protests against police violence swelled, Amazon actively expanded partnerships between Ring and the police.

Since the police killing of George Floyd, Amazon’s Prime Video has been featuring films on the Black experience. Amazon also announced $10 million in contributions to organizations supporting justice and equity; the company claimed to “stand in solidarity with our Black employees, customers, and partners,” and that it is “committed to helping build a country and a world where everyone can live with dignity and free from fear.” Not two weeks later, Amazon came under fire for advertising a “chicken and waffles” Juneteenth celebration for its workers at a Chicago warehouse. Five years ago, 27 percent of Amazon’s U.S. workforce was Black, but the large majority of its Black workers, 85 percent, held unskilled, low-wage jobs. Last year, the diversity numbers for Amazon’s Black employees hadn’t changed much: Only 8 percent of the company’s managers were Black as of December 2019, according to the company’s own data.

Although Amazon and Microsoft announced they would not sell facial-recognition products and services to local law enforcement, neither company indicated that it would refuse to sell these dangerous products to the federal government, or overseas to international law-enforcement clients. Aside from IBM, which has said it’s abandoning its facial-recognition services for good, no company has committed to a moratorium or ban on the sale of its facial-recognition technology to federal law-enforcement agencies such as CBP, ICE, and the Drug Enforcement Administration, among others. In fact, though Microsoft does not currently have any facial-recognition contracts with law-enforcement agencies, emails recently published by the American Civil Liberties Union indicate that back in 2017, Microsoft aggressively pitched its facial-recognition technology to the DEA, the law-enforcement agency driving a war on drugs that has significantly expanded law-enforcement surveillance and filled prisons with Black, Latino, and Native bodies, with profoundly unequal outcomes. When asked directly whether Microsoft would sell facial-recognition technology to federal law enforcement, Microsoft President Brad Smith sidestepped the question, responding that the company wouldn’t let the technology be used in scenarios that led to bias against women and people of color.

In response to Microsoft’s announcement that it would not sell facial-recognition technology to police, President Trump retweeted a June 12 tweet from former Acting Director of National Intelligence Richard Grenell calling for Microsoft to be barred from federal contracts. It seems evident that the federal government intends to expand its use of facial recognition.

The tech giants making these technologies appear to be more conflicted. Google favored a moratorium on facial-recognition technology as early as January, announcing that it would not sell its tech without “resolving important technology and policy questions.” Yet Google still came under fire when media reported that contractors sent by the company to various cities across the U.S., in a misguided attempt to improve the accuracy of its technology, had misled homeless Black people into allowing their faces to be scanned.

And then there is IBM. Exiting the facial-recognition market does not necessarily mean that it’s cutting ties with law enforcement. IBM will continue to sell artificial-intelligence predictive-policing tools, despite overwhelming evidence that threat or crime predictions based on historical arrest and crime data exacerbate existing racial biases. This seems to directly contradict elements of Krishna’s announcement, which stated that the company opposes the use of “any technology, including facial-recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values.”

To add to these mixed messages, Amazon, Microsoft, and Google have continued efforts to ensure federal regulation that offers a stable and profitable market in which facial-recognition technology is, in fact, used by law enforcement, in direct opposition to the movement the companies claim to support. Amazon’s CEO, Jeff Bezos, announced in 2019 that Amazon’s public-policy team was writing its own facial-recognition laws to pitch to federal policy makers, and a Microsoft employee wrote a recently passed Washington State law that does almost nothing to limit or prohibit government use of facial recognition.

IBM, Amazon, and Microsoft are all members of the Integrated Justice Information Systems Institute, which recently teamed up with the International Association of Chiefs of Police to publicize a catalog detailing the ways law enforcement can use facial recognition. At least a half-dozen firms are positioned to lobby against a facial-recognition ban. Facial-recognition companies seem to have no intention of getting out of the game or surrendering a single dollar to unfavorable regulation.

For Black communities, and for all who have suffered generational brutality at the hands of law enforcement, our future relationship to policing must not be directed by private industry. A visionary and inclusive protest movement to dismantle facial-recognition technology is already making sure that it won’t be. Groups such as the Electronic Frontier Foundation, Fight for the Future, Color of Change, MediaJustice, and Mijente, among many others, have called for a complete ban on facial-recognition technology for law enforcement, at all levels of government.

Lawmakers have heeded the call to action. Over the past 12 to 18 months, at least 10 U.S. cities have banned facial recognition, including Oakland, San Francisco, and Berkeley, as well as seven Massachusetts cities: Somerville, Brookline, Easthampton, Boston, Springfield, Cambridge, and Northampton. Oregon and New Hampshire have banned facial-recognition technology in police body cameras, and California’s three-year moratorium on the same went into effect in January 2020. States such as New York and Massachusetts are also considering legislation that would prohibit facial-recognition technology in connection with officer cameras, place a moratorium on all law-enforcement use, and enact broader moratoriums on all government use of facial recognition, which would cover other state agencies and officials.

Democratic lawmakers recently proposed a moratorium on facial-recognition use by federal agencies, as well as on technology for voice recognition and gait recognition, with tough restrictions that withhold federal grants from state and local governments until they pass their own bans. While even open-ended government moratoriums are not as restrictive or effective as an outright ban, they go much further than short-term corporate moratoriums and remain a meaningful way to restrain the use of facial-recognition technology by law enforcement on the road to its abolition. Now all that facial-recognition companies need to do is get out of the way.

In Detroit, a city well known for passionate, community-led organizing, a caravan of about 40 cars recently circled the homes of Detroit City Council members demanding that they vote against a $219,000 contract extension for a facial-recognition-software provider. The Detroit Police Department pairs images taken from various sources, including Project Green Light, a program that installs high-definition cameras paid for by local businesses, with its facial-recognition software to identify suspects in violent crimes. Pressuring local lawmakers to vote against specific facial-recognition contracts is another way to defund face surveillance at the city level.

On the federal level, the Department of Homeland Security has made about $1.8 billion available this fiscal year for local communities in its preparedness-grants program, but for some of these grants, localities must agree to allocate at least 25 percent to law enforcement. And then there’s the Department of Defense’s 1033 Program, which militarizes civilian police by providing surplus military equipment to law-enforcement agencies. Additional major sources of funding for facial-recognition technology are the budgets for federal programs such as U.S. Customs and Border Protection, the Transportation Security Administration, and the Secret Service.

And then there are police departments’ corporate backers. Companies across the U.S. are partnering with police foundations that donate millions to local police departments for things such as surveillance networks, software, and equipment. Amazon, Motorola, Verizon, Facebook, Google, and AT&T are all major corporate supporters of these foundations.

Joy Buolamwini, the founder of the Algorithmic Justice League, said in testimony before Congress, “These tools are too powerful, and the potential for grave shortcomings, including extreme demographic and phenotypic bias, is clear. We cannot afford to allow government agencies to adopt these tools and begin making decisions based on their outputs today and figure out later how to rein in misuses and abuses.”

White supremacy defines how society is structured and how new technologies are used, and it moves at the pace of capital. American policing has, for centuries, upheld white supremacy’s laws and order at the brutal expense of Black lives. As activists innovate strategies to defund American policing and invest in communities, we must not open the door to an expanded and unaccountable security state with automated eyes trained on Black and brown dissent. Ending the system of U.S. policing as we know it must be coupled with strategies to ban facial recognition and the other 21st-century surveillance technologies that empower that system.

The violence of American policing kills about 1,000 people a year, leaves many more with serious injuries, and imprints upon generations of Black and brown people a brutal rite of passage. In an era when policing and private interests have become extraordinarily powerful, and intertwined with each other’s infrastructure, short-term moratoriums, piecemeal reforms, and technical improvements to the software won’t defend Black lives or protect human rights.

In my vision for a nation that invests in Black life and dignity, facial recognition and other forms of biometric policing don’t need more oversight, or to be reformed or improved. Facial recognition, like American policing as we know it, must go.

Malkia Devich-Cyril is an activist, a writer, and a public speaker on the issues of digital rights, narrative power, Black liberation, and collective grief. Devich-Cyril is also a senior fellow at MediaJustice, where she was the founding executive director.