Illustration by Dom McKenzie.

From viral conspiracies to exam fiascos, algorithms come with serious side effects


A mesmerising, unaccountable kind of algorithm – machine learning – is blinding governments to the technology’s often disastrous flaws

Will Thursday 13 August 2020 be remembered as a pivotal moment in democracy’s relationship with digital technology? Because of the coronavirus outbreak, A-level and GCSE examinations had to be cancelled, leaving education authorities with a choice: give the kids the grades that had been predicted by their teachers, or use an algorithm. They went with the latter.

The outcome was that more than one-third of results in England (35.6%) were downgraded by one grade from the mark issued by teachers. This meant that a lot of pupils didn’t get the grades they needed to get into their university of choice. More ominously, the proportion of private-school students receiving A and A* grades was more than twice as high as the proportion at comprehensive schools, underscoring the gross inequality in the British education system.

What happened next was predictable but significant. A lot of teenagers, realising that their life chances had just been screwed by a piece of computer code, took to the streets. “Fuck the algorithm” became a popular slogan. And, in due course, the government caved in and reversed the results – though not before a lot of emotional distress and administrative chaos had been caused. And then Boris Johnson blamed the fiasco on “a mutant algorithm” which, true to form, was a lie. No mutation was involved. The algorithm did what it said on the tin. The only mutation was in the behaviour of the humans affected by its calculations: they revolted against what it did.

Quick Guide

The use and abuse of algorithms


Finance

Algorithms are widely used to accept and reject applications for loans and other financial products, and egregious discrimination is widely thought to occur. In 2019, for example, Apple co-founder Steve Wozniak found that when he applied for an Apple Card he was offered a credit limit 10 times that of his wife, although they shared various bank accounts and other credit cards. Apple’s partner for the card, Goldman Sachs, denied that its decisions were based on gender.

Policing

Software is used to allocate policing resources on the ground and to predict how likely an individual is to commit or be a victim of a crime. Last year, a Liberty study found that at least 14 UK police forces have plans to use crime prediction software. Such software is criticised for creating self-fulfilling crime patterns – ie sending officers to areas where crimes have occurred before – and for the discriminatory profiling of ethnic minorities and low-income communities.

Social work

Local councils have used “predictive analytics” to flag particular families for the attention of children’s services. A 2018 Guardian investigation found that Hackney, Thurrock, Newham, Bristol and Brent councils were developing predictive systems either internally or by hiring private software companies. Critics warn that, aside from concerns about the vast amounts of sensitive data they contain, these systems incorporate the biases of their designers and risk perpetuating stereotypes.

Job applications

Automated systems are increasingly used by recruiters to whittle down pools of jobseekers, invigilate online tests and even interview candidates. Software scans CVs for keywords and generates a score for each applicant. Higher-scoring candidates may be asked to perform online personality and skills tests, and ultimately the first round of interviews may be carried out by bots that use software to analyse facial features, word choices and vocal indicators to decide whether a candidate advances. Each of these stages is based on dubious science and may discriminate against certain traits or communities. Such systems learn bias and tend to favour the already advantaged.

Offending

Algorithms that assess a criminal’s chances of reoffending are widely used in the US. A ProPublica investigation of the Compas recidivism software found that black defendants were often predicted to be at a higher risk of reoffending than they actually were, while white defendants were often predicted to be less risky than they were. In the UK, Durham police force has developed the Harm Assessment Risk Tool (Hart) to predict whether suspects are at risk of offending. The police have refused to reveal the code and data upon which the software makes its recommendations.


And that was a genuine first – the only time I can recall when an algorithmic decision had been challenged in public protests that were powerful enough to prompt a government climbdown. In a world increasingly – and invisibly – regulated by computer code, this uprising might look like a promising precedent. But there are several good reasons, alas, for believing that it might instead be a blip. The nature of algorithms is changing, for one thing; their penetration into everyday life has deepened; and whereas the Ofqual algorithm’s grades affected the life chances of an entire generation of young people, the impact of the dominant algorithms in our unregulated future will be felt by isolated individuals in private, making collective responses less likely.

According to the Shorter Oxford Dictionary, the word “algorithm” – meaning “a procedure or set of rules for calculation or problem-solving, now esp with a computer” – dates from the early 19th century, but it’s only comparatively recently that it has penetrated everyday discourse. Programming is basically a process of creating new algorithms or adapting existing ones. The title of the first volume, published in 1968, of Donald Knuth’s magisterial five-volume The Art of Computer Programming, for example, is “Fundamental Algorithms”. So in one way the increasing prevalence of algorithms nowadays simply reflects the ubiquity of computers in our daily lives, especially given that anyone who carries a smartphone is also carrying a small computer.

The Ofqual algorithm that caused the exams furore was a classic example of the genre, in that it was deterministic and intelligible. It was a program designed to do a specific task: to calculate standardised grades for pupils, in the absence of actual examination results, based on information a) from teachers and b) about schools. It was deterministic in the sense that the same inputs would always produce the same outputs, and the logic it implemented – and the kinds of output it would produce – could be understood and predicted by any competent technical expert who was allowed to inspect the code. (In that context, it’s interesting that the Royal Statistical Society offered to help with the algorithm but withdrew because it regarded the non-disclosure agreement it would have had to sign as unduly restrictive.)
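To see what “deterministic and intelligible” mean in practice, here is a deliberately toy sketch in Python of a rule-based grade adjustment. It is emphatically not the Ofqual model – the inputs, threshold and adjustment rule are invented purely for illustration – but it shows why code of this kind can be read, audited and its outputs predicted.

```python
# Hypothetical illustration only: NOT the Ofqual algorithm.
# A deterministic, rule-based adjustment: the same inputs always
# produce the same output, and the logic is visible in the code.

def standardised_grade(teacher_grade: int, school_historical_avg: float) -> int:
    """Return an adjusted grade (9 = best, 1 = worst) from a teacher's
    predicted grade and the school's historical average grade."""
    adjustment = 0
    if teacher_grade - school_historical_avg > 1.5:
        # The prediction is well above what the school usually achieves,
        # so pull it down by one grade (an invented rule).
        adjustment = -1
    return max(1, min(9, teacher_grade + adjustment))

# Anyone allowed to inspect this can predict its behaviour for any input:
print(standardised_grade(8, 5.9))  # -> 7 (downgraded)
print(standardised_grade(6, 5.9))  # -> 6 (unchanged)
```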

Classic algorithms are still everywhere in commerce and government (there’s one currently causing grief for Boris Johnson because it’s recommending allowing more new housing development in Tory constituencies than Labour ones). But they are no longer where the action is.

Since the early 1990s – and the rise of the web in particular – computer scientists (and their employers) have become obsessed with a new genre of algorithms that enable machines to learn from data. The growth of the internet – and the intensive surveillance of users that became an integral part of its dominant business model – started to produce torrents of behavioural data that could be used to train these new kinds of algorithm. Thus was born machine-learning (ML) technology, often referred to as “AI”, though this is misleading – ML is basically ingenious algorithms plus big data.

Machine-learning algorithms are radically different from their classical forebears. The latter take some input and some logic specified by the programmer and then process the input to produce the output. ML algorithms do not depend on rules defined by human programmers. Instead, they process data in raw form – for example text, emails, documents, social media content, images, voice and video. And instead of being programmed to perform a particular task they are programmed to learn to perform the task. More often than not, the task is to make a prediction or to classify something.
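That contrast can be made concrete with a minimal sketch – the data is invented and the example assumes the widely used scikit-learn library – in which the first function is a classic algorithm whose rule a programmer wrote down, while the second learns a rule from labelled examples and then makes predictions.

```python
# A toy contrast between a classic algorithm and a machine-learning one.
# The "spam" data below is invented purely for illustration.
from sklearn.linear_model import LogisticRegression

# Classic algorithm: the programmer specifies the logic explicitly.
def is_spam_classic(message: str) -> bool:
    return "win a prize" in message.lower()

# Machine learning: no rule is written down. A model is trained on
# labelled examples and then used to classify or predict.
training_features = [[3, 1], [0, 0], [2, 1], [0, 1]]  # e.g. [exclamation marks, contains link]
training_labels = [1, 0, 1, 0]                        # 1 = spam, 0 = not spam

model = LogisticRegression()
model.fit(training_features, training_labels)  # "programmed to learn" the rule
print(is_spam_classic("Win a prize today!"))   # True, and we can say exactly why
print(model.predict([[4, 1]]))                 # [1], a prediction without an explanation
```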

This has the implication that ML systems can produce outputs that their creators could not have envisaged. Which in turn means that they are “uninterpretable” – their effectiveness is limited by the machines’ current inability to explain their decisions and actions to human users. They are therefore unsuitable if the need is to understand relationships or causality; they mostly work well where one only needs predictions. Which should, in principle, limit their domains of application – though at the moment, scandalously, it doesn’t.


Machine-learning is the tech sensation du jour and the tech giants are deploying it in all their operations. When the Google boss, Sundar Pichai, declares that Google plans to have “AI everywhere”, what he means is “ML everywhere”. For corporations like his, the attractions of the technology are many and varied. After all, in the past decade, machine learning has enabled self-driving cars, practical speech recognition, more powerful web search, even an improved understanding of the human genome. And lots more.

Because of its ability to make predictions based on observations of past behaviour, ML technology is already so pervasive that most of us encounter it dozens of times a day without realising it. When Netflix or Amazon tell you about interesting movies or goods, that’s ML being deployed as a “recommendation engine”. When Google suggests other search terms you might consider, or Gmail suggests how the sentence you’re composing might end, that’s ML at work. When you find unexpected but possibly interesting posts in your Facebook newsfeed, they’re there because the ML algorithm that “curates” the feed has learned about your preferences and interests. Likewise for your Twitter feed. When you suddenly wonder how you’ve managed to spend half an hour scrolling through your Instagram feed, the reason may be that the ML algorithm that curates it knows the kinds of images that grab you.

The tech companies extol these services as unqualified public goods. What could possibly be wrong with a technology that learns what its users want and provides it? And at no charge? Quite a lot, as it happens. Take recommendation engines. When you watch a YouTube video you see, down the right-hand side of the screen, a list of other videos that might interest you. That list has been curated by a machine-learning algorithm that has learned what has interested you in the past, and also knows how long you spent on those previous viewings (using time spent as a proxy for level of interest). Nobody outside YouTube knows exactly what criteria the algorithm uses to choose recommended videos, but because YouTube is basically an advertising company, one criterion will definitely be: “maximise the amount of time a viewer spends on the site”.
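A hypothetical sketch of what “time spent as a proxy for level of interest” might look like in code makes the incentive clear – the fields and weights here are invented, since YouTube’s real ranking system is not public – because whatever is predicted to hold the viewer longest rises to the top of the list.

```python
# Hypothetical sketch of a watch-time-weighted recommender score.
# Not YouTube's actual system; the formula and numbers are invented.

def recommendation_score(predicted_watch_seconds: float, topic_affinity: float) -> float:
    """Rank a candidate video by how long the viewer is expected to watch it,
    nudged by how closely it matches their viewing history (0.0 to 1.0)."""
    return predicted_watch_seconds * (1.0 + topic_affinity)

candidates = {
    "calm explainer": recommendation_score(120, 0.2),          # 144.0
    "provocative deep dive": recommendation_score(600, 0.4),   # 840.0
}

# Sorting by this score favours whatever keeps the viewer watching longest.
print(max(candidates, key=candidates.get))  # -> "provocative deep dive"
```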

In recent years there has been much debate about the impact of such a maximisation strategy. In particular, does it push certain kinds of user towards increasingly extremist content? The answer seems to be that it can. “What we are witnessing,” says Zeynep Tufekci, a prominent internet scholar, “is the computational exploitation of a natural human desire: to look ‘behind the curtain’, to dig deeper into something that engages us. As we click and click, we are carried along by the exciting sensation of uncovering more secrets and deeper truths. YouTube leads viewers down a rabbit hole of extremism, while Google racks up the ad sales.”

What we have also discovered since 2016 is that the micro-targeting enabled by the ML algorithms deployed by social media companies has weakened or undermined some of the institutions on which a functioning democracy depends. It has, for example, produced a polluted public sphere in which mis- and disinformation compete with more accurate news. It has created digital echo chambers, and it has led people to viral conspiracy theories such as QAnon and to malicious content orchestrated by foreign powers and domestic ideologues.

The side-effects of machine-learning within the walled gardens of online platforms are problematic enough, but they become positively pathological when the technology is used in the offline world by companies, government, local authorities, police forces, health services and other public bodies to make decisions that affect the lives of citizens. Who should get what universal benefits? Whose insurance premiums should be heavily weighted? Who should be denied entry to the UK? Whose hip or cancer operation should be fast-tracked? Who should get a loan or a mortgage? Who should be stopped and searched? Whose children should get a place in which primary school? Who should get bail or parole, and who should be denied them? The list of such decisions for which machine-learning solutions are now routinely touted is endless. And the rationale is always the same: more efficient and prompt service; judgments by impartial algorithms rather than prejudiced, tired or fallible humans; value for money in the public sector; and so on.

The overriding problem with this rosy tech “solutionism” is the inescapable, intrinsic flaws of the technology. The way its judgments reflect the biases in the datasets on which ML systems are trained, for example – which can make the technology an amplifier of inequality, racism or poverty. And on top of that there’s its radical inexplicability. If a conventional old-style algorithm denies you a bank loan, its reasoning can be explained by examining the rules embodied in its computer code. But when a machine-learning algorithm makes a decision, the logic behind its reasoning can be impenetrable, even to the programmer who built the system. So by incorporating ML into our public governance we are effectively laying the foundations of what the legal scholar Frank Pasquale warned against in his 2015 book The Black Box Society.

In theory, the EU’s General Data Protection Regulation (GDPR) gives people a right to an explanation of an algorithm’s output – though some legal experts are dubious about the practical usefulness of such a “right”. Even if it did turn out to be useful, though, the bottom line is that injustices inflicted by an ML system will be experienced by individuals rather than by communities. The one thing machine learning does well is “personalisation”. This means that public protests against the personalised inhumanity of the technology are much less likely – which is why last month’s demonstrations against the output of the Ofqual algorithm could be a one-off.

In the end the question we have to ask is: why is the Gadarene rush of the tech industry (and its boosters within government) to deploy machine-learning technology – and particularly its facial-recognition capabilities – not a major public policy issue?

The explanation is that for several decades ruling elites in liberal democracies have been mesmerised by what one can only call “tech exceptionalism” – ie the idea that the companies that dominate the industry are somehow different from older kinds of monopolies, and should therefore be exempt from the critical scrutiny that consolidated corporate power would normally attract.

The only consolation is that recent developments in the US and the EU suggest that perhaps this hypnotic regulatory trance may be coming to an end. To hasten our recovery, therefore, a thought experiment might be helpful.

Imagine what it would be like if we gave the pharmaceutical industry the leeway that we currently grant to tech companies. Any smart biochemist working for, say, AstraZeneca, could come up with a strikingly interesting new molecule for, say, curing Alzheimer’s. She would then run it past her boss and present the dramatic results of preliminary experiments to a lab seminar, after which the company would put it on the market. You only have to think of the thalidomide scandal to realise why we don’t allow that kind of thing. Yet it is exactly what the tech companies are able to do with algorithms that turn out to have serious downsides for society.

What that analogy suggests is that we are still at the stage with tech companies that societies were at in the era of patent medicines and snake oil. Or, to put it in a historical frame, we are somewhere between 1906, when the Pure Food and Drug Act was passed by the US Congress, and 1938, the year Congress passed the Federal Food, Drug, and Cosmetic Act, which required that new drugs be shown to be safe before they could be sold. Isn’t it time we got a move on?

John Naughton chairs the advisory board of the new Minderoo Centre for Technology and Democracy at the University of Cambridge
