Artificial intelligence and automation are responsible for a growing number of decisions by public authorities in areas like criminal justice, security and policing, and public administration, despite having proven flaws and biases. Facial recognition systems are entering public spaces without clear accountability or oversight. Lawyers must play a greater role in ensuring the safety and accountability of advanced data and analytics technologies, says Karen Yeung at the University of Birmingham.

How has the law been pushed aside in the age of AI?

Law in the age of AI

The dream of artificial intelligence stretches back seven decades, to a seminal paper by Alan Turing. But only recently has AI been commercialized and industrialized at scale, weaving its way into every nook and cranny of our lives. Fusing statistics, computer science and cognitive psychology, these complex digital systems, whether physical robots or software-enabled services, can match and in some cases surpass human performance in areas like reasoning, visual processing, pattern recognition and autonomous action.

Grand claims are made about the size and scale of AI’s potential benefits and power, and with good reason. AI systems are proving better than humans at diagnosing some forms of cancer and have impressive predictive prowess in everything from weather to forest fires. They could reduce administration in vital social services, identifying children at risk of abuse, for instance, or sifting license plate photographs to prioritize readable images for follow-up analysis [1]. Better, more integrated crime data systems could also help prevent high-risk individuals from escaping investigation and from securing positions for which they should be ruled out [2].

But like any new technology, AI systems have flaws, vulnerabilities and unintended effects, and the list grows in step with their rollout across society. Facial recognition, used to scan crowds for suspected criminals, is less adept at identifying non-white faces. By increasing the likelihood of false positives, ‘face-rec’ raises the risk of wrongful incarceration or criminalization of minority groups. Crime prediction tools also show signs of racial and minority bias, ramping up surveillance in deprived areas on the basis of past law-breaking and flagging minority groups as more likely to reoffend, without reliable evidence of their accuracy or of their effectiveness in preventing or deterring crime.
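The concern about unequal error rates is ultimately arithmetic, and a short sketch can make it concrete. The Python snippet below works through a purely hypothetical scenario (the crowd sizes and false positive rates are invented, not drawn from any real deployment): when a system’s false positive rate is higher for one group, scanning a large crowd wrongly flags far more innocent people per head in that group, even though nobody in the crowd is on the watchlist.

```python
# Illustrative only: the crowd sizes and error rates below are hypothetical,
# not measurements of any real facial recognition system.

def expected_false_matches(crowd_size: int, false_positive_rate: float) -> float:
    """Expected number of innocent people wrongly flagged in a scanned crowd."""
    return crowd_size * false_positive_rate

# Assumed scenario: a 50,000-person crowd with nobody on the watchlist,
# and a false positive rate ten times higher for group B than for group A.
groups = {
    "group A": {"size": 40_000, "false_positive_rate": 0.0005},
    "group B": {"size": 10_000, "false_positive_rate": 0.005},
}

for name, group in groups.items():
    flagged = expected_false_matches(group["size"], group["false_positive_rate"])
    per_10k = 10_000 * flagged / group["size"]
    print(f"{name}: roughly {flagged:.0f} innocent people flagged "
          f"({per_10k:.0f} per 10,000 people scanned)")
```

In this invented example, group B makes up a fifth of the crowd yet accounts for the majority of wrongful flags, which is the mechanism behind the incarceration and criminalization risk described above.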

Karen Yeung, interdisciplinary professorial fellow in law, ethics and informatics at Birmingham Law School and the School of Computer Science at the University of Birmingham, has studied technology ethics in contentious domains, including human genome editing. She sees the rise of AI in government as part of a wider trend of ‘New Public Analytics’, in which data analytics is increasingly enthroned at the heart of public administration. This is happening in the near absence of legal scrutiny and review, despite the risks.

Take facial recognition, already in place in public spaces for security and policing purposes [3]. This “fundamentally reverses the presumption of liberty on which British constitutional culture has long rested,” says Yeung. “It proceeds on the premise that ‘everyone is under suspicion and the state is entitled to surveil and identify individuals in real time’. I think that is very, very dangerous in terms of what it implies for state-citizen relationships, particularly in a state that is constructed on the basis of commitments to democracy and individual freedom.”

Crime prediction and recidivism risk scoring form a second AI application fraught with legal problems. A ProPublica investigation into an algorithm-based criminal risk assessment tool found the formula more likely to flag black defendants as future criminals, labelling them as such at twice the rate of white defendants, while white defendants were mislabelled as low-risk more often than black defendants [4, 5, 6]. “We need to think about the way we are mass-producing decisions and processing people, particularly low-income and low-status individuals, through automation, and their consequences for society,” says Yeung.
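ProPublica’s finding turns on how error rates break down by group, so a brief, hedged sketch may help readers unfamiliar with the statistics. The Python snippet below uses invented counts (they only mimic the pattern ProPublica reported and are not the actual figures from the tool) to show how a group-wise false positive rate, meaning non-reoffenders labelled high-risk, and false negative rate, meaning reoffenders labelled low-risk, are calculated.

```python
# Illustrative only: made-up counts, not the data ProPublica analysed.
# Among people who did NOT reoffend, how often was each group labelled
# "high risk" (false positive rate)? Among people who DID reoffend, how
# often was each group labelled "low risk" (false negative rate)?

from dataclasses import dataclass

@dataclass
class GroupOutcomes:
    high_risk_reoffended: int      # correctly flagged
    high_risk_no_reoffence: int    # wrongly flagged (false positives)
    low_risk_reoffended: int       # wrongly cleared (false negatives)
    low_risk_no_reoffence: int     # correctly cleared

    def false_positive_rate(self) -> float:
        non_reoffenders = self.high_risk_no_reoffence + self.low_risk_no_reoffence
        return self.high_risk_no_reoffence / non_reoffenders

    def false_negative_rate(self) -> float:
        reoffenders = self.high_risk_reoffended + self.low_risk_reoffended
        return self.low_risk_reoffended / reoffenders

# Hypothetical numbers chosen only to mimic the *pattern* ProPublica described.
groups = {
    "black defendants": GroupOutcomes(300, 450, 150, 550),
    "white defendants": GroupOutcomes(200, 230, 250, 770),
}

for name, outcomes in groups.items():
    print(f"{name}: false positive rate {outcomes.false_positive_rate():.0%}, "
          f"false negative rate {outcomes.false_negative_rate():.0%}")
```

With these hypothetical counts, the false positive rate for black defendants comes out at roughly twice that for white defendants, while the false negative rate runs the other way, the same asymmetry at the centre of the published dispute over the tool.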

This should all be a matter of legal discussion and public deliberation rather than being confined to the so-called ‘digiterati’. Automated decision-making systems used by public authorities to make decisions about individuals, including within the criminal justice system, may fail to respect the right to a fair trial and due process, particularly if those systems are biased, if individuals do not know that such analytics are being applied to them, or if they are otherwise deprived of an opportunity to contest the decisions.

The right to privacy, enshrined in the European Convention on Human Rights, is also undermined by the use of face-rec in public spaces, and AI systems that rely on the collection and processing of personal data must respect fundamental rights to data protection. While many such systems have the potential to improve individual and collective welfare, care must be taken to comply with data protection laws. For example, DeepMind, Google’s AI unit, violated UK data protection law and patient privacy rules during the development and testing of an app for the NHS.

In future, AI could transgress these and other human rights in varied and novel ways. Precursors to lethal autonomous weapons (LAWS), which could independently identify and kill human targets, are already thought to exist in some militarily advanced countries, despite the technology appearing to contravene the Geneva Conventions. Deep-fakes – synthetically produced videos of real people saying or doing things they did not say or do – can be used for a range of ends, from enabling corporate fraud to inciting political violence.

From “AI ethics” to the rigour of the law

NGOs including Amnesty International, Access Now, Human Rights Watch, Privacy International, Liberty, the American Civil Liberties Union (ACLU) and the Electronic Frontier Foundation (EFF) are doing important work illuminating AI’s risks to human rights and liberties. Tech companies are also responding to real and hypothesized risks of their inventions. DeepMind has created an ‘independent advisory panel’. Microsoft has published ethical guidelines to which it claims it will adhere. Google employees have protested against the company’s sale of AI software to the US military.

But for Yeung, the current ‘AI ethics’ discourse, which has led to a variety of concrete governance mechanisms, including the establishment of data ethics units by some governments, is framed in a manner that tends to favour industry self-regulation, based on the questionable assumption that the tech industry can be trusted to mark its own homework. “We're seeing policy makers starting to take seriously the public anxiety about AI,” says Yeung. “But my worry is that these debates are being framed in a way that is dominated by the rubric of ‘AI ethics’, which is unduly narrow, and in which the law and legal institutions have been fundamentally side-lined”.

Yeung calls for the “development of effective and legitimate institutional mechanisms to prevent and forestall violations to human rights which these technologies may threaten, and to attend to the health of the larger collective and shared socio-technical environment in which democracy, human rights and the rule of law are anchored”.

Yeung believes we need to move quickly and decisively because, unlike other domains of technology ethics such as genome editing [7], AI systems are already ‘out in the wild’ and widely available. “You cannot edit human genomes in your backyard, so effective ethical governance of human genome editing might rely primarily on institutional expertise. By contrast, AI is built primarily by software designers and developers, many of whom lack any formal training, let alone training in professional ethics: the cat is out of the bag”.

The importance of law

Yeung is optimistic that laws and legal frameworks can deal with AI’s emergent risks. The EU General Data Protection Regulation (GDPR), for instance, contains provisions aimed at providing effective legal safeguards for data subjects, including rights to transparency, explanation and contestation in relation to fully automated decisions that process personal data. Its principles of data minimisation and purpose specification can also be expected to mitigate some of the risks of personal data being misused in the development and use of AI systems.

While still in its early days, GDPR shows that governments can enforce legal restraints on digital innovation rather than relying on industry volunteerism and self-regulation. It proves that the public interest can be defended without scaring off digital innovators. And as it is tested through complaints, investigations and enforcement, its scope will be clarified and will evolve.

“There's an unholy alliance between government and the tech industry, because so many governments see tech as the solution to their economic woes,” says Yeung. “They say ‘if we could just grow the tech industry, then we will attract entrepreneurs, stimulate the economy, increase government revenue, and thereby enhance the well-being of everyone, so we mustn’t regulate because this would stifle innovation’. This, in my view, is one of the main reasons why the law has hitherto been sidelined”. Acting regionally can help restrain corporate practices without encouraging companies to leave markets: no firm can afford to ignore the European Union’s population of more than 500 million relatively high-income consumers. “GDPR proves that acting regionally is warranted and does not pose risks to innovation”.

Yeung argues that AI risks can be situated within long-established legal principles and models of responsibility. Intention and culpability, for instance, are central criteria in both civil and criminal cases. Since computers lack the capacity for subjective knowledge and intent, proving intent will require focusing on the human individuals who intentionally develop or deploy AI for dangerous or malicious purposes.

Negligence is another legal category of note in the AI era. Where computational agents generate decisions or behaviours that cause harm without any intent on the part of their developers, there is still a legal question as to whether reasonable effort was made to foresee such harm. Technology developers can and should be held accountable for negligently failing to take steps to avoid such “reasonably foreseeable harms”. System defectiveness is a linked area. Manufacturers are legally liable if they release defective products that harm human users. Tech companies, especially in areas like autonomous vehicles, where human injuries and deaths have already occurred, must be similarly accountable. This matters all the more at a time when a growing number of vendors are piling into sectors like autonomous cars, smart systems and connected homes, in which accountability for failures could become even harder to establish.

Yeung also notes the possibility of mandatory insurance schemes, funded by the relevant technology sectors, enabling individuals harmed by those technologies to seek financial compensation in cases where responsibility is difficult to pinpoint precisely.

Civil society and universities have a major role to play in ensuring accountability and appropriate legal scrutiny of the tech industry, but there is a capacity gap. Academic experts must maintain their independence, Yeung says, arguing against a tendency for industry to co-opt ethicists through funding. As a group, public lawyers and academic legal researchers must play a role in translating public law principles into the AI age, acquiring new skills where needed, and working with algorithm developers, computer and data scientists, and electronic engineers to ensure that the infrastructures shaping our daily lives are safely moored to legal principles rooted in foundational commitments to democracy and individual freedom.

Notes

  1. https://www.ncjrs.gov/pdffiles1/nij/252038.pdf
  2. https://www.ft.com/content/81af2e14-7fb9-11e8-bc55-50daf11b720d
  3. https://www.bbc.co.uk/news/technology-49320520
  4. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
  5. https://www.nature.com/articles/d41586-018-05469-3
  6. http://www.govtech.com/data/When-Big-Data-Gets-It-Wrong.html
  7. https://www.theguardian.com/science/2018/jul/17/genetically-modified-babies-given-go-ahead-by-uk-ethics-body
