It's time for AI ethics to grow up

The ethical challenges of AI are well known – but there's been little action. Now lawmakers need to step in

The ethical challenges of artificial intelligence are well known. In 2020 we will realise that AI ethics will need to be codified in a realistic and enforceable way. Failing to do so will pose an existential threat to individuals, companies and society.

Despite being a highly experimental and often flawed technology, AI is already in widespread use. It is often used when we apply for a loan or a job. It is used to police our neighbourhoods, to scan our faces to check us against watchlists when we shop and walk around in public, to sentence us when we are brought before a judge and to conduct aspects of warfare. All this is happening without a legal framework to ensure that AI use is transparent, accountable and responsible. In 2020 we will realise that this must change.

Concerns about AI are not confined to civil-liberties and human-rights activists. London's Metropolitan Police commissioner Cressida Dick has warned that the UK risks becoming a “ghastly, Orwellian, omniscient police state” and has called for law-enforcement agencies to engage with ethical dilemmas posed by AI and other technologies. Companies that make and sell facial-recognition technology, such as Microsoft, Google and Amazon, have repeatedly asked governments to pass laws governing its use – so far to little avail.

Many people would argue that this debate should go even wider than AI, calling on us to embed ethics into every stage of our technology. This means asking not just “Can we build it?” but “Should we?” It means examining the sources of funding for our technology (such as Saudi Arabia, which is a big investor in SoftBank’s Vision Fund). And it means recognising that the lack of diversity and inclusion in technology creates software and tools that exclude much of the population, and that our datasets carry deep and damaging bias.

We will challenge the idea, long held by many technology enthusiasts, that technology is “neutral” and that we should allow technology companies to make money while refusing to take responsibility beyond the bare minimum of compliance with the law.

Kate Crawford, co-founder of the AI Now Institute, challenged this position in her lecture to the Royal Society in 2018 when she asked: “What is neutral? The way the world is now? Do we think the world looks neutral now?” And Shoshana Zuboff, in her 2019 book The Age of Surveillance Capitalism, argued that the power of technology can be understood through the answers to three questions: “Who knows? Who decides? Who decides who decides?” Even a brief survey of the world of technology today shows clearly that such power must be contained.

In 2020, we will understand the need to codify our ideas about what would make AI – and technology as a whole – ethical. Decision-makers in government and the private sector are already exploring ethics as part of their thinking and planning for the future of work and society. Next year we will need to continue that search and embed ethical values in legislation.

What’s missing at the moment is a rigorous approach to AI ethics that is actionable, measurable and comparable across stakeholders, organisations and countries. There’s little use, for example, in asking STEM workers to take a Hippocratic oath, having companies appoint a chief ethics officer or offering organisations a dizzying array of AI-ethics principles and guidelines to implement if we can’t test the efficacy of these ideas.

In the year ahead we will see the need for that rigour. Until now, AI ethics has felt like something to have a pleasant debate about in the academy. In 2020 we will realise that not taking practical steps to embed it in the way we live will have catastrophic effects.

Stephanie Hare is a technology, politics and history researcher and broadcaster. Her book on technology ethics will be published in 2020

This article was originally published by WIRED UK