Amazon’s Ring Planned Neighborhood “Watch Lists” Built on Facial Recognition

Documents hint the data could be shared with police, but Ring denies the features are in use or development.


Ring, Amazon’s crimefighting surveillance camera division, has crafted plans to use facial recognition software and its ever-expanding network of home security cameras to create AI-enabled neighborhood “watch lists,” according to internal documents reviewed by The Intercept.

The planning materials envision a seamless system whereby a Ring owner would be automatically alerted when an individual deemed “suspicious” was captured in their camera’s frame, something described as a “suspicious activity prompt.”

It’s unclear who would have access to these neighborhood watch lists, if implemented, or how exactly they would be compiled, but the documents refer repeatedly to law enforcement, and Ring has forged partnerships with police departments throughout the U.S., raising the possibility that the lists could be used to aid local authorities. The documents indicate that the lists would be available in Ring’s Neighbors app, through which Ring camera owners discuss potential porch and garage security threats with others nearby.

Ring spokesperson Yassi Shahmiri told The Intercept that “the features described are not in development or in use and Ring does not use facial recognition technology,” but would not answer further questions.

This month, in response to continued pressure from news reports and a list of questions sent by Massachusetts Sen. Edward Markey, Amazon conceded that facial recognition has been a “contemplated but unreleased feature” for Ring, but would only be added with “thoughtful design including privacy, security and user control.” Now, we know what at least some of that contemplation looked like.

Mohammad Tajsar, an attorney with the American Civil Liberties Union of Southern California, expressed concern over Ring’s willingness to plan the use of facial recognition watch lists, fearing that “giving police departments and consumers access to ‘watch listing’ capabilities on Ring devices encourages the creation of a digital redline in local neighborhoods, where cops in tandem with skeptical homeowners let machines create lists of undesirables unworthy of entrance into well-to-do areas.”

Legal scholars have long criticized the use of governmental watch lists in the United States for their potential to ensnare innocent people without due process. “When corporations create them,” said Tajsar, “the dangers are even more stark.” As difficult as it can be to obtain answers on the how and why behind a federal blacklist, American tech firms can work with even greater opacity: “Corporations often operate in an environment free from even the most basic regulation, without any transparency, with little oversight into how their products are built and used, and with no regulated mechanism to correct errors,” Tajsar said.

Mounting Concern About Ring

Once known only for its line of internet-connected doorbell cameras marketed to the geekily cautious, Ring has quickly turned into an icon of unsettling privatized surveillance. The Los Angeles company, now owned by Amazon, has been buffeted this year by reports of lax internal security, problematic law enforcement partnerships, and an overall blurring of the boundaries between public policing and private-sector engineering. Earlier this year, The Intercept published video of a special online portal Ring built so that police could access customer footage, as well as internal company emails about what Ring’s CEO described as the company’s war on “dirtbag criminals that steal our packages and rob our houses.”

Previous reporting by The Intercept and The Information revealed that Ring has at times struggled to make facial recognition work, instead relying on remote workers from Ring’s Ukraine office to manually “tag” people and objects found in customer video feeds. The automated approach to watch-listing described in the documents reviewed by The Intercept may seem less unsettling than that human-based approach, but it potentially allows for a litany of its own problems, like false positives and other forms of algorithmic bias.

In its public-relations efforts, Ring has maintained that only thieves and would-be criminals need to worry about the company’s surveillance network and the Neighbors app. From the way Ring’s products are designed to the way they’re marketed, the notion of “suspicion” remains front and center; Ring promises a future in which “suspicious” people up to “suspicious” things can be safely monitored and deterred from afar.

But “suspicious” is an entirely squishy concept with some potentially very dangerous interpretations, a byword of dog-whistling neighborhood racists who hope to cloak garden-variety prejudice in the mantle of public safety. The fact remains that anyone moving past a home equipped with Ring cameras is unavoidably sucked into a tech company dragnet, potential fodder for overeager chatter among the suburban xenophobe set. To civil libertarians, privacy scholars, and anyone generally nervous about the prospect of their neighbors forming a collective, artificially intelligent video panopticon maintained by Amazon for unregulated use by police, Ring’s potential consequences for a community are clear.

Earlier this fall, Motherboard reported on a push by Ring to encourage camera owners to seek out, identify, and report to police anything and anyone they considered “unusual” in exchange for product discounts. According to the story, Ring “encouraged people to report all ‘suspicious activity,’ including loitering, ‘strange vans and cars,’ ‘people posing as utility workers,’ and people walking down the street and looking into car windows.”

Documents Show “Proactive Suspect Matching”

According to the Ring documents reviewed by The Intercept, which have not been previously reported, the company planned a string of potentially invasive new surveillance features for its product line, of which the facial recognition-based watch-list system is one part.

In addition to the facial watch lists, Ring has also worked on a so-called suspicious activity prompt feature that would alert users via in-app phone notification when a “suspicious” individual appears in their property’s video feeds. In one document, this feature is illustrated with a mockup of a screen in the Neighbors app, showing a shabbily dressed man walking past a Ring owner’s garage-mounted camera. “Suspicious Activity Suspected,” warns the app. “This person appears to be acting suspicious. We suggest alerting your neighbors.” The app then offers a large “Notify Neighbors” button. How exactly “suspicious” would be defined is a mystery the document does not address.

A third potentially invasive feature referenced in the Ring documents is “proactive suspect matching,” described in a manner that strongly suggests the ability to automatically identify people suspected of criminal behavior — again, whether by police, Ring customers, or both is unclear — based on algorithmically monitored home surveillance footage. Ring is already very much in the business of providing — with a degree of customer consent — valuable, extrajudicial information to police through its police portal. A “proactive” approach to information sharing could mean flagging someone who happens to cross into a Ring video camera’s frame based on some cross-referenced list of “suspects,” however defined. Paired with the reference to a facial recognition watch list and Ring’s generally cozy relationship with local police departments across the country, it’s easy to imagine a system in which individuals are arbitrarily profiled, tracked, and silently reported upon, all through infrastructure owned and operated solely by Amazon, without legal recourse or any semblance of due process. Here, says Tajsar, “Ring appears to be contemplating a future where police departments can commandeer the technology of private consumers to match ‘suspect’ profiles of individuals captured by private cameras with those cops have identified as suspect — in fact, exponentially expanding their surveillance capabilities without spending a dime.”

Researchers and legal scholars have for years warned that facial recognition and self-teaching machine learning technologies are susceptible to racial biases, and in many cases, can amplify and propagate such biases further — of particular concern in a law enforcement or security context, where racial prejudice is already systemic. A February review of the Neighbors app by Motherboard found that out of “100 user-submitted posts in the Neighbors app between December 6 and February 5, the majority of people reported as ‘suspicious’ were people of color.”

In an interview with The Intercept, Liz O’Sullivan, a privacy policy advocate and technology director at the Surveillance Technology Oversight Project, described Ring’s planned “proactive suspect matching” feature as “the most dangerous implementation of the word ‘proactive’ I’ve ever heard,” and questioned the underlying science behind any such feature. “All the AI attempts I’ve seen that try to detect suspicious behavior with video surveillance are absolute snake oil,” said O’Sullivan, who earlier this year publicly resigned from Clarifai, an AI image-analysis firm, over its work for the Department of Defense.

O’Sullivan explained that “there’s no scientific consensus on a definition of visibly suspicious behavior in biometrics. The important question to ask is, Who gets to decide what suspicious looks like? And the way I’ve seen it attempted in industry, it’s just an approximation.” Any attempt to hybridize humankind’s talents for prejudice with a computer’s knack for superhuman pattern recognition is going to result in superhuman prejudice, O’Sullivan fears. “In order for society to function well, police have to be impartial; we have to get to a place where they treat people equally under the law, not differently according to whatever way an algorithm ‘thinks’ we look.”

For better or for worse, the technology’s potential to amplify the prejudices of its makers and customers is one that some members of the company’s staff have already grappled with, according to a Ring source who spoke to The Intercept on the condition of anonymity because they were not permitted to discuss company matters. This source recounted concerned conversations with colleagues about the possible social consequences of their company’s technology. “We were talking about Neighborhood” — Ring’s residential surveillance social network and police resource — “about how all it is is people reporting people in hoodies. We talked about the culture of fear that we’re perpetuating,” they said. Like O’Sullivan, the source was particularly concerned over the “proactive suspect matching” feature, which they said was “designed to basically aggregate videos and create a profile of a suspect who’s hit up multiple homes in a neighborhood,” and which the source believed would end up prone to racial bias. It would, this person said, “maybe catch porch pirates, but more realistically fuck over an innocent person of color.”

Ring’s spokesperson declined to answer a list of specific questions about the planned features, including what the company’s institutional definition of “suspicious” is, whether someone on a Ring “watch list” would ever be informed of this fact, or what someone would have to be “suspected” of in order to be labeled a “suspect” in Ring’s systems. “Any features we do develop,” Shahmiri said, “will include strong privacy protections and put our customers in control.”

Do you have a tip to share about Ring, its use of facial recognition, its relationship with police, or other information in the public interest? You can contact Sam Biddle via Signal at +1 (978) 261-7389, by email at sam.biddle@theintercept.com, or via The Intercept’s encrypted SecureDrop system.
