Facial recognition technology is now sold as standard in CCTV systems. Photograph: Getty Images/iStockphoto

Lack of guidance leaves public services in limbo on AI, says watchdog


CCTV commissioner says he gets many queries about facial recognition and other tools

Police forces, hospitals and councils struggle to understand how to use artificial intelligence because of a lack of clear ethical guidance from the government, according to the country’s only surveillance regulator.

The surveillance camera commissioner, Tony Porter, said he received requests for guidance all the time from public bodies that do not know where the limits lie when it comes to the use of facial recognition, biometric and lip-reading technology.

“Facial recognition technology is now being sold as standard in CCTV systems, for example, so hospitals are having to work out if they should use it,” Porter said. “Police are increasingly wearing body cameras. What are the appropriate limits for their use?

“The problem is that there is insufficient guidance for public bodies to know what is appropriate and what is not, and the public have no idea what is going on because there is no real transparency.”

The watchdog’s comments came as it emerged that Downing Street had commissioned a review led by the Committee on Standards in Public Life, whose chairman had called on public bodies to reveal when they use algorithms in decision making.

Lord Evans, a former MI5 chief, told the Sunday Telegraph that “it was very difficult to find out where AI is being used in the public sector” and that “at the very minimum, it should be visible, and declared, where it has the potential for impacting on civil liberties and human rights and freedoms”.

AI is increasingly deployed across the public sector in surveillance and elsewhere. The high court ruled in September that the police use of automatic facial recognition technology to scan people in crowds was lawful.

Its use by South Wales police was challenged by Ed Bridges, a former Lib Dem councillor, who noticed the cameras when he went out to buy a lunchtime sandwich, but the court held that the intrusion into privacy was proportionate.

Durham police have spent three years evaluating an AI tool devised by Cambridge University to predict whether an arrested person is likely to reoffend and so should not be released on bail.

Similar technologies used in the US, where they also guide sentencing, have been accused of concluding that black people are more likely to be future criminals, but the results of the British trial are yet to be made public.

The committee is due to report to Boris Johnson in February, but Porter said the task was urgent because of the rapid pace of technological change and an unclear system of regulation in which no single body had oversight.

The information commissioner is responsible for the use of personal data but not surveillance, while Porter’s office regulates the use of CCTV systems and all technologies attached to them, including facial recognition and lip-reading software.

“We’ve been calling for a wider review for months,” Porter said. “The SCC, for example, is the only surveillance regulator in England and Wales and we date back to when the iPhone 5 was new and exciting. So much has changed since.”
