Face-detection surveillance is one way technology can help to track the spread of Covid-19. Photograph: Jochen Tack/Alamy Stock Photo

For all its sophistication, AI isn't fit to make life-or-death decisions

Kenan Malik

‘Following the science’ is a disingenuous policy because mathematical reckoning and human judgments are very different things

Artificial intelligence is searching for the drugs to combat Covid-19. It has enabled the pandemic to be tracked and information about it to be synthesised. It is diagnosing patients, triaging them, and identifying those in need of intensive care before their condition deteriorates.

There is much hype about the use of artificial intelligence (AI) to combat coronavirus. Some of it is justified. Its ability to sift through vast amounts of data and to recognise patterns has been of great value. Some of the hype is flannel. AI did not, as some claim, predict the pandemic before humans recognised it. Inflated claims, for instance for the accuracy of AI in diagnosing Covid-19, should be cause for suspicion, not celebration.

Some of it is cause for concern – such as the use of AI for mass surveillance. And some aspects of the pandemic have exposed the limitations of AI. Algorithms trained on normal human behaviour, such as those used by online retailers, are often flummoxed now that people’s behaviour around shopping and travel has changed so completely.

All this suggests a need to be clearer about what machines are good at and what humans are good at. The computer scientist and philosopher Brian Cantwell Smith distils that difference into the distinction between what he calls “reckoning” and “judgment”.

Reckoning is essentially calculation: the ability to manipulate data and recognise patterns. Judgment, on the other hand, refers to a form of “deliberative thought, grounded in ethical commitment and responsible action, appropriate to the situation in which it is deployed”.

Judgment, Smith observes, is not simply a way of thinking about the world, but emerges from a particular relationship to the world that humans have and machines do not. Humans are both embodied and embedded in the world. We are able to recognise the world as real and as unified but also to break it down into distinct objects and phenomena. We can represent the world but also appreciate the distinction between representation and reality. And, most importantly, humans possess an ethical commitment to the real over the representation. What is morally important is not the image or mental representation I have of you, but the fact that you exist in the world. A system with judgment must, Smith insists, not simply be able to think but also to “care about what it is thinking about”. It must “give a damn”. Humans do. Machines don’t.

AI may be able to triage patients because of its ability to recognise patterns that humans may miss, but no machine has an ethical commitment to patients in the way doctors and nurses do. For them, it is not just the facts or patterns that matter, but that patients possess worth and dignity that need protecting.

Human judgments are not necessarily good, nor do humans always act with judgment. But all humans have the capacity to do so. That is why we hold humans, but not machines, morally accountable for their acts.

The distinction between reckoning and judgment is important in assessing not just machines but humans, too. In this pandemic, one of the key responses from government ministers, when faced with difficult political questions, has been to suggest that they are merely “following the science”. Not only is there no single “science” to follow, and much of what we know about coronavirus is laced with uncertainty, but even if the science were clear, we still could not abjure judgment.

Take the debate about the reopening of schools. Suppose we knew for certain how likely it was for children to become infected, and to infect others, and what exactly the R number was in any part of the country. The decision on whether to reopen would still rest on making political and moral judgments about how to balance the risk of increased infection with the risk of children losing education and of the disadvantaged being further disadvantaged.

The best public policy rests on facts but it also requires us to choose between competing demands, and to decide how these fit into various ethical and political commitments. There can be good or bad judgment, but there can be no pretence that there is no judgment.

The “we’re following the science” line suggests that policymaking is a matter merely of reckoning – of making calculations from the given data – rather than also of judgment. It suggests, too, that a computer, rather than a human, would be best placed to lead the fight against Covid-19.

Machines cannot think like humans. Humans should not act like machines. Even politicians.

Kenan Malik is an Observer columnist
