Hackney town hall. Hackney council in east London has abandoned using data analytics to help it predict which children are at risk of neglect and abuse. Photograph: Justin Setterfield/Getty Images

Councils scrapping use of algorithms in benefit and welfare decisions

Call for more transparency on how such tools are used in public services as 20 councils stop using computer algorithms

Councils are quietly scrapping the use of computer algorithms in helping to make decisions on benefit claims and other welfare issues, the Guardian has found, as critics call for more transparency on how such tools are being used in public services.

It comes as an expert warns that the reasons government bodies around the world are cancelling programmes range from problems in the way the systems work to concerns about bias and other negative effects. Most systems are implemented without consultation with the public, but critics say this must change.

The use of artificial intelligence or automated decision-making has come into sharp focus after an algorithm used by the exam regulator Ofqual downgraded almost 40% of the A-level grades assessed by teachers. It culminated in a humiliating government U-turn and the system being scrapped.

The fiasco has prompted critics to call for more scrutiny and transparency about the algorithms being used to make decisions related to welfare, immigration, and asylum cases.

The Guardian has found that about 20 councils have stopped using an algorithm to flag claims as “high risk” for potential welfare fraud. Flagged claims were pulled out for staff to double-check, potentially slowing down people’s claims without claimants being aware.

Previous research by the Guardian found that one in three councils were using algorithms to help make decisions about benefit claims and other welfare issues.

Researchers at the Cardiff Data Justice Lab (CDJL), working with the Carnegie UK Trust, have been examining cancelled algorithm programmes.

According to the lab, Sunderland council has stopped using an algorithm that was designed to help it make efficiency savings of £100m.

Their research also found that Hackney council in east London had abandoned using data analytics to help predict which children were at risk of neglect and abuse.

The Data Justice Lab found at least two other councils had stopped using a risk-based verification system – which identifies benefit claims that are more likely to be fraudulent and may need to be checked.

One council found it often wrongly identified low-risk claims as high-risk, while another found the system did not make a difference to its work.

Dr Joanna Redden from the Data Justice Lab said: “We are finding that the situation experienced here with education is not unique … algorithmic and predictive decision systems are leading to a wide range of harms globally, and also that a number of government bodies across different countries are pausing or cancelling their use of these kinds of systems.

“The reasons for cancelling range from problems in the way the systems work to concerns about negative effects and bias. We’re in the process of identifying patterns, but one recurring factor tends to be a failure to consult with the public and particularly with those who will be most affected by the use of these automated and predictive systems before implementing them.”

The Home Office recently stopped using an algorithm to help decide visa applications after allegations that it contained “entrenched racism”. The charity the Joint Council for the Welfare of Immigrants (JCWI) and the digital rights group Foxglove launched a legal challenge against the system, which was scrapped before a case went to court.

Foxglove characterised it as “speedy boarding for white people” but the Home Office said it did not accept that description. “We have been reviewing how the visa application streaming tool operates and will be redesigning our processes to make them even more streamlined and secure,” the Home Office added.

Martha Dark, the director and co-founder of Foxglove, said: “Recently we’ve seen the government rolling out algorithms as solutions to all kinds of complicated societal problems. It isn’t just A-level grades … People are being sorted and graded, denied visas, benefits and more, all because of flawed algorithms.”

She said poorly designed systems could lead to discrimination, adding that there had to be democratic debate and consultation with the public on any system that affected their lives before that system was implemented. “These systems have to be transparent, so bias can be identified and stopped.”

Police forces are increasingly experimenting with the use of artificial intelligence or automated decision-making.

The West Midlands police and crime commissioner’s strategic adviser, Tom McNeil, said he was “concerned” businesses were pitching algorithms to police forces knowing their products may not be properly scrutinised.

McNeil said: “In the West Midlands, we have an ethics committee that robustly examines and publishes recommendations on artificial intelligence projects. I have reason to believe that the robust and transparent process we have in the West Midlands may have deterred some data science organisations from getting further involved with us.”

Research from the Royal Society of Arts published in April found at least two forces were using or trialling artificial intelligence or automated decision-making to help them identify crime hotspots – Surrey police and West Yorkshire police.

Others using algorithms in some capacity or other include the Met, Hampshire Constabulary, Kent police, South Wales police, and Thames Valley police.

Asheem Singh, the RSA thinktank’s director of economics, said: “Very few police consulted with the public. Maybe great work is going on but police forces don’t want to talk about it. That is concerning. We are talking about black-box formulae affecting people’s livelihoods. This requires an entire architecture of democracy that we have not seen before.”

Without consultation “the principle of policing by consent goes out of the window”, Singh added.

The National Police Chiefs’ Council said it was unable to comment.

The Centre for Data Ethics and Innovation, an independent advisory body, is reviewing potential bias in algorithms. “Our review will make recommendations about how police forces and local authorities using predictive analytics are able to meet the right standards of governance and transparency for the challenges facing these sectors,” it said.
