'Killer robots' are not science fiction – they're here

Campaign to Stop Killer Robots has one main message – all weapons must have meaningful human control


The Campaign to Stop Killer Robots is not looking to prevent the rise of Robocop. It has no plans to crush the (convincing yet CGI) robot from Bosstown Dynamics that turned on its masters in a viral video earlier this week. While these fictional androids serve to get us talking about the morality of developing fully autonomous systems capable of harming or even killing people, they make the threat seem like part of a distant dystopian future.

The reality is that lethal autonomous weapons systems are being developed and tested right now, and Prof Noel Sharkey (you may know him as a judge from the TV show Robot Wars), along with many others in this NGO collective, says it is vital to pre-emptively ban weapons capable of selecting a target and applying violent or lethal force without any human in the loop.

Sharkey is emeritus professor of robotics and artificial intelligence at the University of Sheffield, where his focus over the years has shifted from robotics to robot ethics.

“The main aim is to get this idea across that all weapons must have meaningful human control. These are weapons that, once they’re launched, go out and find their own targets and kill, maim or apply violent force to them without any human supervision.

“We’re not talking about the Terminator, we’re talking about tanks, submarines, ships and planes. I think some people think lethal autonomous weapons are the stuff of science fiction, but they actually just look like slightly futuristic conventional weapons,” says Sharkey.

“There’s the Armata T-14 Russian super-tank, for example, and that is absolutely massive. Military advisers tell me it looks at least 10 years ahead of any other tank on the planet. It can be remote controlled at the moment, but they’re rushing to make it fully autonomous so it can go out on its own.”

These slightly futuristic weapons require advanced technology to power them, and technology companies have a part to play in all of this. Several big players have skin in the game with military contracts to provide technology that might not necessarily be a lethal autonomous weapons system but has the potential to be used as part of future warfare.

Combat training

Despite employee backlash, Microsoft is going ahead with its $479 million HoloLens contract with the US army, under which its augmented-reality headsets will be used in combat training.

“We did not sign up to develop weapons, and we demand a say in how our work is used,” said an open letter from employees who claimed its use would “turn warfare into a simulated ‘video game’.”

The biggest backlash, however, came from Project Maven, a Pentagon project that saw Google ink a contract to provide AI and machine-learning technology for drone footage analysis. Thousands of Google employees signed a petition to stop their work being used for military ends, and the company promised not to renew the contract.

One of these employees was Laura Nolan, a software engineer who worked at Google’s Dublin campus. Nolan left the company last year over the issue and has since become involved with the Campaign to Stop Killer Robots.

“I was asked to help on a major project that would have involved some really big changes, and I asked why because it was obviously going to be very long and expensive. It turned out these were underpinnings for a later version of Project Maven. Probably also JEDI as well, which was the Joint Enterprise Defence Initiative’s cloud computing bid.

“I was told it was to do with AI, drones, military analysis, that kind of thing. For me straightaway there were red flags. I didn’t get into tech to work on military projects, and I think the way the US in particular [engages] in warfare is quite problematic.

“My first thoughts were ‘I don’t want to do military work; I don’t want to do anything related to drones. I think this whole thing sounds a bit dodgy, and aren’t we Google? We’re the company who always made this big noise about organising the world’s information to make it useful and accessible – not for killing people’.”

Uneasy

What was jarring to Nolan and others was that Google's previous position had been to explicitly oppose such contracts, even going so far as to rid itself of an innocuous contract for running a DARPA robotics competition when it acquired Boston Dynamics.

After being asked not to make any noise about Maven she kept quiet, but made it clear she was strongly opposed to the whole thing and was feeling increasingly uneasy. This came to a head in early 2018 when the project was leaked internally, and following a series of emergency meetings Google executives had to address many unhappy employees.

“I was hoping that something would happen, that Google would actually pull back on it or something. They did say they wouldn’t implement the second phase of Maven. A lot of people think it was cancelled ages ago but it was an 18-month project scheduled to end in March 2019. So up until then as far as I know Google was still doing this work,” says Nolan.

This is where ethics boards should come in handy. But if they are designed in a non-transparent way that gives the appearance of adherence to ethical guidelines, without the company having to comply with anything or be accountable to an external committee, then multiple technology companies appear to be having their cake and eating it. Technology and human rights researcher Ben Wagner calls this “ethics washing”.

“Google’s leads’ position on [its ethical stance] was very interesting,” says Nolan. “They said ‘here are our ethical principles’ but they were vague enough that nobody could really tell if Maven complied with them or not.

“And I had said before, I bet that a) they won’t clearly exclude Maven, and b) there will be a carve-out for surveillance. And lo, there was indeed a carve-out for surveillance, and Maven is primarily a surveillance project. So to my mind you could read those principles and conclude that Maven is perfectly well within scope.”

Signing a petition

If you are opposed to working for a tech company that has a military contract, staying and criticising from within is one option. The Campaign to Stop Killer Robots suggests signing a petition, donating to the campaign, writing a letter to your local politician, asking questions about the issue at your company’s next meeting or even becoming a whistle-blower.

“The problem with protesting from within is that at a certain point you might conclude that it is ineffective. The nature of tech these days is that it’s very hard to work in one of these big cloud platform companies and say, ‘okay, I’ll work on this stuff over here, but I won’t work on the parts that are being used in military work, right’?

“So how do you work on complex infrastructure and complex software systems for a company of that size and guarantee that you’re not contributing to military ends? I don’t think you can. That ultimately is one of the reasons why I left. I no longer trusted Google’s leadership to work on things that I ethically approve of,” says Nolan.

After leaving Google the first thing Nolan did with the Campaign to Stop Killer Robots was go to the Convention on Certain Conventional Weapons (CCW) at the United Nations in Geneva.

“Like other conventions it’s a whole set of protocols or treaties that various states are signed up to. It’s basically a disarmament club of around 120 states that meets at the UN on a regular basis and deals with ‘conventional’ weapons: things that explode, projectiles and lasers.”

The reason the campaign is interested in getting involved at this level is that it has the potential to effect change: in 1995 the CCW managed to add a protocol banning blinding laser weapons.

“That’s something that we really like to talk about in the lethal autonomous weapons world because the laser ban treaty was how we hope the autonomous weapons ban treaty will be: it was pre-emptive.”

One drawback of the CCW is that rather than ruling by majority vote it is consensus-based. It requires all 120-odd members to be on the same page, which is difficult when some members are part of the meetings to discuss lethal autonomous weapons but may not want a treaty to prohibit their development and use.

"Russia don't seem to want a treaty on this. There is what they're saying and their actions. Everyone is pretty convinced that Russia is investing in these systems and Kalishnakov are building systems that are fairly autonomous."

Suicide drone

Earlier this year, for example, the Russian arms manufacturer unveiled the KUB-UAV, which it describes as “a high-precision attack unmanned aerial system”. It has been dubbed a “suicide drone” due to its primary function: destroying remote targets from up to 64km away by blowing itself up.

And it isn't just Russia. China, Israel, Australia, Turkey, the US and the UK are also developing these kinds of weapons. All of this is happening at an alarming speed, says Sharkey, who reckons the US is probably still ahead of the game.

"We've got this problem at the UN where most of the nations might support the idea of having a principle of human control for a weapon, and the International Committee of the Red Cross, who preserve the Geneva conventions, are pushing the idea that all weapons must be fully human-controlled, but there's a lot of argument around what the term 'human control' means."

This is what the Campaign to Stop Killer Robots is moving towards: an agreement over what it means to have meaningful human control of these advanced weapons. The idea is that such weapons will always have a human in the loop rather than a machine deciding on its own not only to acquire a target but to carry out a hit.

“What some people call ‘control’ could be pressing a button to launch the system. Or it could be having a right to veto the weapon – having a few seconds to call it back,” says Sharkey.

“One of the big problems here is what we call automation bias: people come to trust the technology. It just tells you ‘this is a target’ and you press the button and go with it because it’s been right several times.”
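The distinction can be made concrete as a question of where the decision sits in the control flow. The sketch below is purely illustrative – none of the names, labels or thresholds come from any real system – but it shows the difference between a weapon that engages on its classifier’s say-so and one where a person must approve each target, and how automation bias can erode that difference in practice.

```python
# Purely illustrative sketch: the names, labels and thresholds are invented.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Candidate:
    classifier_label: str   # what the targeting model believes it is seeing
    confidence: float       # the model's own certainty, which may be misplaced

def fully_autonomous(candidates: list[Candidate]) -> list[Candidate]:
    # The machine decides: anything it labels a threat above a threshold is engaged.
    return [c for c in candidates
            if c.classifier_label == "military vehicle" and c.confidence > 0.9]

def meaningful_human_control(candidates: list[Candidate],
                             human_approves: Callable[[Candidate], bool]) -> list[Candidate]:
    # The machine only proposes; a person judges the legitimacy of every target.
    # Automation bias is the failure mode where human_approves degenerates into
    # "approve whatever the model scored highly", which is no control at all.
    return [c for c in candidates if human_approves(c)]

spotted = [Candidate("military vehicle", 0.97),
           Candidate("military vehicle", 0.92)]   # the second is, say, a misclassified civilian truck

print(len(fully_autonomous(spotted)))                           # 2: both engaged automatically
print(len(meaningful_human_control(spotted, lambda c: False)))  # 0: a person can always withhold approval
```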

There is the obvious question: if the technology is sufficiently advanced, what could go wrong? We have been told AI is now more accurate than humans at expert tasks including reviewing legal contracts, detecting heart attacks and even creating machine-learning software. Why not warfare too?

“Apart from the obvious – that an autonomous weapon could be hacked and used for other means – there is the problem from complex systems theory that systems with multiple interacting parts and feedback loops can be very unstable,” says Nolan.

“An analogy I came across in Paul Scharre’s book Army of None is the stock market, where algorithmic trading is done by trading company bots. Things called flash crashes happen fairly regularly: these bots will get into a loop and all start selling, and the price of the particular stock or commodity will take a nosedive really fast.”

Flash war

In autonomous warfare, robots on the battlefield could encounter an unanticipated scenario in their interactions with people or other robots, and this could lead to a “flash war”. When flash crashes occur, the stock exchange has a time-out mechanism in place to take the bots out of action for a while; no comparable mechanism could pull lethal autonomous weapons out of action before a great deal of damage is done, and done very fast.
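To make the analogy concrete, here is a toy simulation – not drawn from Scharre’s book, and with made-up numbers throughout – of how momentum-following bots can talk each other into a crash, and how an exchange-style circuit breaker cuts the feedback loop short. It is exactly that kind of time-out that would be missing once autonomous weapons start reacting to one another.

```python
# Toy simulation of a "flash crash": momentum-following bots react to each
# other's selling and drive the price down until a circuit breaker halts trading.
# All thresholds and the price model are made up for illustration.

def run_market(circuit_breaker: bool, steps: int = 50) -> None:
    price = last_price = start_price = 100.0
    n_bots = 5
    halt_drop = 0.10                          # halt trading after a 10% fall (assumption)

    for step in range(steps):
        falling = price < last_price          # each bot sells if the price just fell
        sellers = n_bots if falling else 1    # one "accidental" sale seeds the loop
        last_price = price
        price *= 1 - 0.01 * sellers           # every sale knocks roughly 1% off the price

        if circuit_breaker and price < start_price * (1 - halt_drop):
            print(f"step {step:2d}: price {price:6.2f} -> trading halted, bots paused")
            return

    print(f"step {steps:2d}: price {price:6.2f} -> no halt, the feedback loop ran unchecked")


run_market(circuit_breaker=True)   # halts within a few steps of the 10% drop
run_market(circuit_breaker=False)  # keeps nosediving for as long as the loop runs
```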

It seems like a far-removed concept to those of us living in Ireland, and, as Nolan points out, the brunt of whatever damage autonomous warfare may inflict will inevitably and unfortunately be experienced first by people in conflict zones in poorer parts of the world. Yet we also need to consider autonomous robots in policing.

“We’re in a situation where we don’t know what’s going to happen with the Border in the North, and within our lifetimes that has been policed by military. We know the UK are developing autonomous weapons. Could we see autonomous weapons at the Border in the North? Maybe.”

Ireland’s position is that lethal autonomous weapons are a bad thing, but Nolan says Irish diplomats have not pushed for a treaty. Right now around 30 countries in the CCW are calling for a treaty to prohibit such weapons, and Ireland is not among them.

"We would really like Ireland to add its voice to this. Belgium are considering a national law to ban the use of these lethal autonomous weapons, and I would love to see Ireland do the same. We do have a defence force, and I would love to see them commit to not using these weapons."

Yet what it boils down to, says Sharkey, is two critical functions: selecting a target and firing a weapon or applying violent force.

“Everything else can be autonomous but you need a person to decide upon the legitimacy of every target. I think that seems like a very reasonable approach, don’t you?”