A NAO humanoid robot, developed by Softbank Corp. subsidiary Aldebaran Robotics SA. Photograph: Bloomberg/Getty Images

A ban on autonomous weapons is easier said than done


Stephen Hawking, Elon Musk, Steve Wozniak and artificial intelligence researchers published a letter calling for a ban on autonomous weapons. This is an easy first step. A ban that works in practice will be much harder.


To coincide with a major Artificial Intelligence (AI) conference in Buenos Aires this week, leading scientists, world-renowned philosophers and technology investors signed a letter urging a ban on weapons that use AI technology. We have added our names to the sixteen thousand (and rising) signatures. We are not in favour of automated weapons that make the decision to kill someone. As reported in the Guardian yesterday, the researchers who drafted the letter think that autonomous cars already demonstrate the technical capabilities needed to build such weapons.

But signing the letter is the easy part. The history of global technology regulation warns us that making this kind of statement is much easier than realising what it asks for. It can be difficult to work out exactly what to ban and to make a ban stick. It is even harder to design a smart moratorium on technology - one that reflects the motivations behind the open letter published this week.

Does a ban make sense in practice?

For 50 years, online security software was classed as a munition by the US Government, thanks to the major role cryptographic programmes played in World War II. This meant the export of such software was heavily restricted. These conditions were relaxed in the 1990s: online security software had become important far beyond the military, and the restrictions were holding back new industries and improvements to the security of an ever-growing number of online transactions.

The problem with restricting the availability of software is that it affects all potential uses, including ones that cannot be predicted decades in advance. The thousands of scientists who have signed the letter calling for a ban on military uses of AI may inadvertently be inviting restrictions on their own ability to share software with international collaborators or to develop future products.

As Patrick Lin, director of the Ethics & Emerging Sciences Group at California Polytechnic State University, told io9.com: “Any AI research could be co-opted into the service of war, from autonomous cars to smarter chat-bots... It’s a short hop from innocent research to weaponization.”

The tension between the dual uses of technology - for harm and for good - is particularly difficult to manage when the exact same technology can be used in a wide and unpredictable range of ways.

More work needed to imagine future uses of AI

Larry Lessig argues that “code is law”. Or, more generally, as Langdon Winner put it in the 1970s, “technology is legislation”. The way a technology is constructed - particularly the systems of interaction with humans it creates - bakes in a specific way of working. This frames the kind of decisions we can make, and the kind of control we can have over that technology. For example, the way a computer algorithm is written determines the points at which a human can direct it. The Bureau of Investigative Journalism’s description of the complex protocols between drone pilots and their civilian back-up team illustrates this in detail in a military context. It makes clear how the points of human intervention constructed in an otherwise automated process direct how and when people are involved. In these situations, technology can become legislation by proxy. We should try to control technology precisely because technology ends up controlling us.

We should worry about technology that controls us, but that is not the same as resigning ourselves to technological determinism. Kevin Kelly argues that prohibition of technology is futile because it has a life of its own. But technologies, as Kelly should know given his proximity to their development, are far from inevitable. There are choices to be made, by innovators, consumers and citizens, about what gets made and what gets used.

Worries about building automated weapons can be addressed directly by all of these people, not left to an abstract group of military technology experts. More detailed discussion, and fundamentally more work, is needed to imagine and reimagine the uses of a technology as it is developed, putting in safeguards from the start.

We need activist regulators as well as activist researchers

There are already attempts at a kind of sophisticated technology regulation by anticipation. The FBI approaches synthetic biology research case by case rather than with a blanket policy. A specialist group of agents visits labs around the US, helping scientists think through the potential consequences of their work. This avoids indiscriminate bans on specific technologies or techniques.

But this kind of interventionist approach can start to stray beyond the legitimate territory of a regulator and can reduce its effectiveness. There are huge uncertainties around the potential uses of new kinds of biology. What if the FBI agent and the scientists disagree about the risk of a particular new technique or genetic manipulation? How far into an unknown future does a regulator's jurisdiction extend?

Perhaps the most relevant template for AI scientists today is the 40-year-old Biological Weapons Convention (BWC), which prohibits the development or possession of biological weapons. The case for the continued importance of this international treaty was made on this blog in March. The treaty continues to work well thanks to the hard work of people like Piers Millet, until recently of the BWC Implementation Support Unit. Piers is an active part of the global community of academics and political organisations debating developing biological threats and the best ways to respond to them. As a regulator, he refused to stay behind a UN desk in Switzerland.

It is these unsung heroes who can make the difference to whether a ban or restriction on a technology is smart enough to be worthwhile. The open letter on AI points to the parallels with the BWC:

Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits.

But it stops short of suggesting how this would work in practice, or who would play a role. It’s great that representatives of the International Committee for Robot Arms Control (ICRAC), such as Mark Bishop, have signed up too. They have been involved in UN discussions about automated weapons for at least two years already. Maybe this committee is the starting point for the next stage of action.

But signatures from technology leaders at companies like Apple and Google bring a new level of clout and visibility to the argument. And they could add pressure by reflecting the terms of the letter in their own organisations - setting up ethics and governance oversight in a way they haven’t so far.

The BWC implementation team support the work that needs to be done to properly manage an international ban on a technology. Piers also brought an outside voice to discussions about how to govern potential biological weapons - a check on the bias created by a community involved in, or inspired by, the development of a particular technology.

The problem with self-governing science

A global moratorium on geoengineering has been mooted at various times over the last decade. Research that engineers changes to the atmosphere and climate at a large scale is risky enough to warrant a wider debate before it goes ahead. Scientists developing these techniques understand this, and held a conference at Asilomar in 2010 to discuss the governance of their work. This followed the example of geneticists, who held a similar meeting as their discipline started to take off in the 1970s, including a self-imposed moratorium on research for two years while they decided how to balance the benefits of their advancing science with public fears.

The geoengineering Asilomar meeting was part of a series of attempts to construct a self-governing body for this research. But no matter how well intentioned these efforts are, there are limits to the perspectives that self-governance brings to debates about the ethics of developing new technology.

Accounts from the time of the original Asilomar meeting provide evidence that many scientists then saw self-governance as a way to avoid heavy regulation rather than as the best way to reflect their sense of responsibility. This will, obviously, have affected the kinds of regulatory options they were willing to consider as part of their discussions. Coupled with pressures from biotechnology investors ready to pounce on any new development, this could add up to a kind of unspoken, unchallenged mutual bias.

As well as these unspoken external pressures, the internal norms of science unhelpfully narrowed the acceptable topics of conversation at Asilomar. Sheila Jasanoff, J. Benjamin Hurlbut and Krishanu Saha said on this blog in April: “Asilomar offers an easy recipe for public policy: a research moratorium followed by an expert assessment of which risks are acceptable and which warrant regulation.” They argue that this way of working can avoid addressing issues that become key to public debate in years to come - such as the environmental release of engineered organisms or the ethical aspects of human genetic engineering. The technical debates at Asilomar were too narrow to cover the issues that became most important over time.

The AI letter this week betrays the biases inherent in a group thinking about restricting the very thing they work on. It talks of wanting to avoid “potentially creating a major public backlash against AI that curtails its future societal benefits”. One of the justifications for taking a stand against autonomous weapons is to build a smoother path for the continuation of AI research. This group will prefer options that allow them to continue their research, even if that reduces the effectiveness of any ban on AI as a weapon. The history of Asilomar tells us that they risk not addressing the issues that become most acute as AI continues to develop.

From biologists in the 1970s to geoengineers today, there are groups of scientists who have taken a stand against the misuse of the technology they develop. But by focusing on large-scale hazards, they can miss the potency of more local influences on the direction of their research - from the desire to avoid overzealous legislation to pressure from their business affiliations. If this growing community around AI is to avoid this pitfall, they need to go beyond a second column of non-AI-expert signatories on their letter. There needs to be a permanent, challenging voice helping to develop global governance for AI technology. This will not just help them turn this week’s worthwhile call into action, but turn it into the kind of action that doesn’t simply serve the more subtle pressures on today’s AI community.

The 12th and 16th paragraphs of this blogpost were amended on 3rd August to reflect the fact that Piers Millet recently left the BWC Implementation Support Unit.
