America's top maker of cop body cameras says facial-recog AI isn't safe

You listening, Cressida Dick?

Analysis America's largest manufacturer of body cameras – and the biggest supplier to police forces across the United States – says today's facial recognition technology is not safe for making serious decisions.

Speaking during its second-quarter earnings call with investors this week, the CEO of Axon, Rick Smith, answered a question about whether the company would be adding facial-recognition systems to its suite of products and, if so, whether that would come with an additional cost.

Smith responded in clear terms that current facial recognition is simply not accurate enough to "make operational decisions" – that is, for police to use it to identify individuals and treat a positive match as justification for automatically and unquestioningly apprehending people. Well, the computer says you're wanted, so here come the cuffs, we can imagine officers saying.

"We don’t have a timeline to launch facial recognition," Smith said on the conference call (listen in at around the 40-minute mark), noting that Axon doesn't have a team "actively developing it" either. He added: "This is technology that we don’t believe the accuracy thresholds are right where they need to be to make operational decision off of facial recognition."

That blunt assessment is important given the increasingly widespread use of the technology by social media companies like Facebook, smart home companies like Nest, and police forces like London's Metropolitan Police.

There is a real risk that, because facial recognition can be seen to work in closely defined situations – Facebook and Nest can, for example, run facial recognition against a small set of likely individuals, such as your friends or people who have visited your house – more serious applications will be considered before the technology is ready, as the rough sketch below illustrates.
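One reason small-set recognition flatters the technology is simple arithmetic: even a matcher with a very low per-comparison false-match rate starts throwing up spurious hits once it is searching a large watchlist rather than a handful of friends or house guests. Here is a minimal sketch of that effect, using purely hypothetical error rates and gallery sizes rather than anything Axon, Facebook, or Nest have published:

```python
# Illustrative only: the false-match rate below is a made-up figure,
# not a vendor benchmark.
# Probability of at least one false match when a probe face is compared
# against a gallery of n enrolled identities, assuming independent
# comparisons each with per-comparison false-match rate fmr.

def chance_of_false_match(n: int, fmr: float) -> float:
    """P(at least one false match) = 1 - (1 - fmr)^n."""
    return 1.0 - (1.0 - fmr) ** n

fmr = 1e-4  # hypothetical 0.01% false-match rate per comparison

for gallery_size in (10, 1_000, 100_000, 1_000_000):
    p = chance_of_false_match(gallery_size, fmr)
    print(f"gallery of {gallery_size:>9,}: {p:.1%} chance of a false hit")
```

With those made-up numbers, checking a face against ten friends almost never misfires, while sweeping the same matcher across a watchlist of hundreds of thousands makes a false hit a near certainty.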

Regulation

Last month, Microsoft called for Congress to regulate the US government's use of facial-recognition technology – technology that in some cases it provides – after it came under fire for selling face-probing technology to Uncle Sam's Immigration and Customs Enforcement (ICE) agency.

Zero arrests, 2 correct matches, no criminals: London cops' facial recog tech slammed

READ MORE

The Met Police, over in London, England, has also been running a controversial trial of facial recognition software at a number of public events – something that led to its head, Cressida Dick, being quizzed by the London Assembly.

During that grilling, Dick acknowledged the trials had not been very successful: reports showed a 98 per cent false-positive rate. "It’s a tool, it’s a tactic. I’m not expecting it to result in lots of arrests," she said – but admitted the Met intends to keep plodding ahead with it regardless.
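For a sense of what a 98 per cent false-positive rate means on the street, the arithmetic is blunt. The alert counts below are assumed for illustration, pairing the two correct matches reported from the trials with a hypothetical number of bogus flags, rather than being the Met's own figures:

```python
# Illustrative arithmetic only: the false-alert count is assumed for the
# sake of the example, not taken from official Met Police statistics.
true_matches = 2      # correct identifications reported from the trials
false_alerts = 102    # hypothetical number of incorrect flags

total_alerts = true_matches + false_alerts
false_positive_rate = false_alerts / total_alerts

print(f"{false_positive_rate:.0%} of alerts pointed at the wrong person")
# -> roughly 98% of alerts pointed at the wrong person
```

In other words, at that error rate, virtually every person the system flags to officers is not the person it thinks they are.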

And that is in addition to the UK Home Office asking for tenders for a £4.6m ($5.9m) contract to build a facial recognition system and database. The UK police's drive to introduce the technology, despite its terrible performance and the privacy concerns, has even drawn the ire of the UK Biometrics Commissioner, Professor Paul Wiles, who criticized the government for failing to produce a formal document outlining its biometrics strategy, and argued that it needs to provide a proper public accounting of such trials.

Wiles is also concerned about an estimated £100m that has been plowed into efforts to link up cops' IT systems and develop digital forensics. In his annual report, he noted that the police's use of new biometric tech isn't always organized or systematic, with a "worrying vacuum" in governance and a lack of oversight.

The long-delayed biometrics strategy from the UK government finally emerged this summer, runs to a puny 14 pages, and was described as looking like "a late piece of homework with a remarkable lack of any strategy."

Threshold

Back in the United States, while Axon's CEO was careful to stress his company is not planning to offer facial recognition any time soon, he also made plain that he feels it is coming, and that the company will offer it as soon as it meets the necessary "accuracy thresholds."

He also warned that the entire technology could be "imperiled" if it was rolled out too early. "Once we’ve got a tight understanding of the privacy and accountability controls, we need to ensure that it will be acceptable by the public at large," he argued. "At that point we would move into commercialization of that capability."

Smith went on: "But this is one where we think you don’t want to be premature and end up with technical failures with disastrous outcomes, or something where there’s some unintended use case where it ends up being unacceptable publicly, and imperils the long term use of the technology." ®

Hat-tip to Dave Gershgorn of Quartz for, from what we can tell, first noting Axon's comments.
