
At War

Are Killer Robots the Future of War? Parsing the Facts on Autonomous Weapons

Milrem Robotics’ Tracked Hybrid Modular Infantry System at the Changi Exhibition Centre in Singapore in February 2016. Credit: Roslan Rohman/Agence France-Presse, via Getty Images

It’s a freezing, snowy day on the border between Estonia and Russia. Soldiers from the two nations are on routine border patrol, each side accompanied by an autonomous weapon system, a tracked robot armed with a machine gun and an optical system that can identify threats, like people or vehicles. As the patrols converge on uneven ground, an Estonian soldier trips and accidentally discharges his assault rifle. The Russian robot records the gunshots and instantaneously determines the appropriate response to what it interprets as an attack. In less than a second, both the Estonian and Russian robots, commanded by algorithms, turn their weapons on the human targets and fire. When the shooting stops, a dozen dead or injured soldiers lie scattered around their companion machines, leaving both nations to sift through the wreckage — or blame the other side for the attack.

The hypothetical scenario seems fantastical, but those battlefield robots already exist today in an early form. Milrem Robotics, a company based in Estonia, has developed a robot called THeMIS (Tracked Hybrid Modular Infantry System), which consists of a mobile body mounted on small tank treads, topped with a remote-weapon turret that can be equipped with small- or large-caliber machine guns. It also includes cameras and target-tracking software, so the turret can pursue people or objects as programmed. This is a human-controlled system for now (and Milrem, for its part, insists that it will remain that way), but the components are there for a robot that can interpret what it sees, identify likely combatants and target them, all on its own. “The possible uses for the THeMIS,” the robot’s builders gush on the company’s website, “are almost limitless.”


The decision to use a lethal weapon in battle against combatants has always been made by a human being. That may soon change. Advances in artificial intelligence, machine image recognition and robotics have put some of the world’s largest militaries on the cusp of a future in which weapon systems may find and kill people on the battlefield without human involvement. Russia, China and the United States are all working on autonomous platforms that pair weapons with sensors and targeting computers; Britain and Israel are already using weapons with autonomous characteristics: missiles and drones that can seek out and attack an adversary’s radar, vehicle or ship without a human command triggering the immediate decision to fire.

Under what circumstances can and should militaries delegate the decision to take a human life to machines? It’s a moral leap that has raised fundamental questions about the nature of warfare, questions on which military planners, human rights organizations, defense officials, research analysts and ethicists have yet to reach consensus.

The technology for weapons systems to identify and acquire targets independently has existed in basic form for several decades. In the 1980s and ’90s, United States war planners experimented with Harpoon and Tomahawk missiles that could identify targets independently; both are in use today, albeit with human oversight.

The most advanced automated American weaponry has focused on defensive applications. When rocket and mortar attacks threatened large American bases in Iraq in 2003, the Army developed the Counter-Rocket, Artillery and Mortar system (C-RAM) — a rapid-fire 20 mm cannon that can identify an incoming airborne threat, alert a human operator and — with the operator’s press of a button — track and destroy it with a burst of special ammunition that self-destructs in midair to minimize damage to friendly personnel or civilians below.
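The sequence the C-RAM follows (detect, alert, wait for a human to consent, then automate the intercept) can be reduced to a toy sketch. The snippet below is illustrative only: every name, label and threshold in it is invented, and the real system fuses far more sensor data than a single classification tag.

```python
# Illustrative sketch only: a hypothetical human-on-the-loop engagement gate
# in the spirit of the C-RAM sequence. All names, labels and thresholds are invented.
from dataclasses import dataclass

@dataclass
class Track:
    track_id: int
    classified_as: str        # e.g. "mortar", "bird", "aircraft"
    seconds_to_impact: float

def is_threat(track: Track) -> bool:
    # The real system fuses radar and other sensor data; here we only check a label.
    return track.classified_as in {"rocket", "artillery", "mortar"}

def engage(track: Track, operator_confirms) -> str:
    if not is_threat(track):
        return "ignored"
    # Alert the operator; nothing fires until a human presses the button.
    if operator_confirms(track):
        # Tracking and the intercept burst are automated only after consent.
        return f"engaging track {track.track_id}"
    return "held fire"

# Example: an operator who confirms every alert still never fires at a non-threat.
always_confirm = lambda track: True
print(engage(Track(1, "bird", 20.0), always_confirm))     # ignored
print(engage(Track(2, "mortar", 6.5), always_confirm))    # engaging track 2
```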

Airmen in Afghanistan preparing an MQ-9 Reaper for a mission in 2016. Credit: Josh Smith/Reuters

Those weapons were based on a naval gun system, the Phalanx, which is considered a last line of defense against antiship missiles. The Phalanx was just one of the Navy’s automated answers to late Cold War pressures: Soviet naval doctrine relied on overwhelming enemy ship squadrons with as many as 60 cruise missiles at a time. “When large salvos come in, it’s impossible for humans to be able to say, ‘O.K., you’ve got to take this missile out first,’” says Robert Work, a senior fellow at the Center for a New American Security in Washington. “There was no way for humans in the combat information center to be able to keep up with that.” So American planners developed Phalanx and the Aegis Combat System, which links sensors on fleet ships and aircraft to identify airborne threats and, with operator input, automatically attack them with shipboard missiles. It was programmed, Work says, “to have a totally automatic setting, and literally the human at some point pushes the button and the machine makes all the decisions.”
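The ordering problem Work describes is simple to state, even if solving it under fire is not: with dozens of missiles inbound, a machine can rank them by time to impact far faster than any crew. A minimal, purely illustrative sketch of that prioritization step, with invented numbers and track names:

```python
# Illustrative sketch only: ranking an incoming salvo by time to impact.
# Numbers and track names are invented; real combat systems weigh many more factors.
incoming = [
    {"id": "M-07", "seconds_to_impact": 41.0},
    {"id": "M-03", "seconds_to_impact": 12.5},
    {"id": "M-19", "seconds_to_impact": 12.4},
    {"id": "M-11", "seconds_to_impact": 88.0},
]

# With dozens of missiles inbound, no crew can produce this ordering in time;
# in the automatic setting the machine sorts and assigns interceptors in milliseconds.
engagement_queue = sorted(incoming, key=lambda track: track["seconds_to_impact"])

for missile in engagement_queue:
    print(f"assign interceptor to {missile['id']} ({missile['seconds_to_impact']} s to impact)")
```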

Whether such systems count as lethal autonomous weapons depends on how autonomous they are. “There’s a type of fire-and-forget weapon where the weapon itself decides, ‘O.K., this is what I see happening in the battlefield, and I think I’m going to have to take out this particular tank because I think this tank is the command tank,’” says Work. This is his definition of a true lethal autonomous weapon system: an independent weapon that decides everything about who and what it destroys once a human has unleashed it.

As deputy secretary of defense between 2014 and 2017, Work was responsible for carrying out the Pentagon’s Third Offset Strategy, a plan to counter potential adversaries’ numerical advantages by putting innovative technologies at the core of United States military doctrine. The United States’ first offset, in President Eisenhower’s day, built on America’s nuclear advantage; a second offset in the 1970s and ’80s emphasized the nation’s advances in computers and missile-guidance technology. For Work, the third offset meant leveraging artificial intelligence and machine autonomy to create a smarter, faster network integrating humans and machines. It also meant watching how other states and nonstate actors developed their own autonomous capabilities, from expendable unmanned aerial vehicles to tanks or missile batteries augmented by artificial intelligence.

Darpa (the Defense Advanced Research Projects Agency) has a program called CODE, or Collaborative Operations in Denied Environment, to design sophisticated software that will allow groups of drones to work in closely coordinated teams, even in places where the enemy has been able to deny American forces access to GPS and other satellite-based communications. CODE “is not intended to create autonomous weapons,” says Paul Scharre, author of “Army of None: Autonomous Weapons and the Future of War,” but rather to adapt to “a world where we’ll have groups of robots operating collaboratively together under one person in supervisory control. The program manager has compared it to wolves hunting in coordinated packs.”

CODE’s human operators monitor the swarm without micromanaging it, and the autonomy of the drones means that they are programmed to improvise and adjust as they pursue their preset mission. “The idea here is that CODE is going after mobile or rapidly relocatable targets, so the target locations cannot be specified precisely in advance by humans,” Scharre says. “It’s not like a Tomahawk cruise missile, where you just program in the coordinates and then the missile goes and strikes it. The drones have to be able to search an area and find targets on the move.”
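One way to picture that supervisory arrangement is a handful of search agents sweeping assigned sectors and funneling every candidate contact to a single human for review. The sketch below is a loose illustration of that idea, not of CODE’s actual software; every name and number in it is invented.

```python
# Illustrative sketch only: a few search agents sweeping sectors under one
# human supervisor. Everything here is invented; it is not CODE's software.
import random

random.seed(0)
SECTORS = ["north", "south", "east", "west"]

def sweep(sector):
    # Pretend each drone searches its sector and sometimes spots a mobile contact.
    return [f"contact-{sector}-{i}" for i in range(random.randint(0, 2))]

def supervisor_approves(contact):
    # A single person reviews every candidate; here they simply approve each one.
    print(f"supervisor reviewing {contact}")
    return True

# Each drone improvises within its sector, but engagement decisions funnel
# through the one human in supervisory control.
for sector in SECTORS:
    for contact in sweep(sector):
        if supervisor_approves(contact):
            print(f"cleared to prosecute {contact}")
```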

A simpler autonomous system is the loitering munition: a drone that can fly for some time on its own, looking for specific signals, before finding a target and crashing into it with an explosive payload. Israel produces a loitering munition, dubbed the “Harpy,” which is designed to seek out and destroy enemy radar stations and which became a sticking point in U.S.-Israeli relations when some were sold to China in the late 1990s. Companies in several nations, including Slovakia and the United States, have also produced loitering munitions.
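Stripped to its essentials, the loitering munition’s behavior is a loop: circle, listen for an emitter that matches a stored signature, commit if one appears, give up when fuel or time runs out. A toy version of that loop, with made-up signatures and sensor readings, might look like this:

```python
# Illustrative sketch only: the loiter-search-commit loop of a Harpy-style
# munition, reduced to a toy. Signatures and sensor readings are invented.
TARGET_SIGNATURES = {"radar_band_x", "radar_band_y"}

def loiter(sensor_readings, max_minutes=120):
    """Circle the area, checking one emission reading per minute until a match or time-out."""
    for minute, reading in enumerate(sensor_readings):
        if minute >= max_minutes:
            break
        if reading in TARGET_SIGNATURES:
            return f"committing to emitter {reading} at minute {minute}"
    return "no matching emitter; abort per the munition's default behavior"

# Example: quiet airspace for a while, then a matching radar lights up.
readings = ["noise", "noise", "radar_band_z", "radar_band_x"]
print(loiter(readings))
```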

“Imagine that we are fighting in a city and we have a foe that is using human life indiscriminately as a human shield,” says Tony Cerri, who until recently oversaw data science, models and simulations at the U.S. Army Training and Doctrine Command. “It’s constantly in your face as you’re out walking in the street. You can’t deal with every situation. You are going to make a mistake.”

Images from a computer showing a strike by a Brimstone missile, a British weapon, on an Islamic State armed truck in Iraq. Credit: Ministry of Defense/Crown Copyright, via Associated Press

Autonomous weapons, he suggests, are much less likely to make such mistakes: “A robot, operating with milliseconds, looking at data that you can’t even begin to conceive, is going to say this is the right time to use this kind of weapon to limit collateral damage.” This argument parallels the controversial case for self-driving cars. In both instances, sensor-rich machines navigate a complex environment without the fatigue, distractions and other human fallibilities that can lead to fatal mistakes. Yet both arguments discount the emergent behaviors that can come from increasingly intelligent machines interpreting the world differently from humans.

In autonomous cars, those new behaviors have already had deadly consequences, like when a Tesla Model S in “autopilot” mode in Florida failed to recognize a tractor-trailer crossing the highway ahead and drove under it, killing the car’s owner. That crash, in 2016, was the first of two fatal accidents on highways in the United States involving a Tesla on autopilot. Uber temporarily pulled its own self-driving cars from the roads last March, when one killed a pedestrian walking her bike across a street in Arizona.

Even advocates concede that autonomous military robots could behave in new and unforeseen ways, introducing new kinds of errors — the way trading algorithms unleashed on the stock market can cause flash crashes, or the way a pair of dueling book resellers’ algorithms on Amazon drove the price of a $70 biology text to $23 million in 2011. Those possibilities — and the danger that lethal machine errors could precipitate bloodier conflicts — will require close cooperative measures between robot-armed nations. Delegations to the United Nations have begun discussions on regulating lethal autonomous weapon systems under the existing Convention on Certain Conventional Weapons, which governs weapons ranging from booby traps to blinding lasers.
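The Amazon episode is worth pausing on, because the mechanism is a feedback loop simple enough to reproduce: one seller priced its copy just below its competitor’s, the competitor priced its copy well above the first seller’s, and because the combined multiplier exceeded 1, the price grew exponentially. Using multipliers roughly like those reported at the time (assumed here purely for illustration), a $70 listing climbs past $23 million in a few dozen update cycles:

```python
# Illustrative sketch only: two repricing bots feeding on each other's output.
# The multipliers approximate those reported in the 2011 Amazon incident and are
# assumptions for illustration, not a reconstruction of either seller's code.
UNDERCUT = 0.998   # seller A prices just below seller B's listing
MARKUP = 1.27      # seller B prices well above seller A's listing

price_a, price_b = 70.00, 75.00
cycles = 0
while price_b < 23_000_000:
    price_a = UNDERCUT * price_b   # A reprices against B
    price_b = MARKUP * price_a     # B reprices against A
    cycles += 1

# The combined multiplier per cycle is about 1.27, so the price grows exponentially.
print(f"price reaches ${price_b:,.2f} after {cycles} update cycles")
```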

In August, a United Nations working group set down principles that could be used to guide future international regulations on lethal autonomy. Chief among them is the idea of ensuring human responsibility for any weapon throughout its life cycle, which could create new liabilities for arms manufacturers. The United Nations group is set to release a formal report, with recommendations for future action, this month. Besides governments, the group’s meetings attract their share of activists, like Mary Wareham of Human Rights Watch, who is the global coordinator for the Campaign to Stop Killer Robots, an initiative launched in 2013 with the explicit mission of keeping humans in charge of lethal decisions in war.

“We’ve focused on two things that we want to see remain under meaningful, or appropriate, or adequate, or necessary human control,” Wareham says. “That’s the identification and selection of targets and then the use of force against them, lethal or otherwise.” Those are the key decision points, critics say, where only a human’s judgment — capable of discriminating between enemies and civilians, and keeping a sense of proportionality in responding — can be accountable and satisfy the conventions of war.

The way in which we tend to justify war or condemn certain acts “has to do with the fact that human beings are making decisions that are either moral or immoral,” says Pauline Shanks Kaurin, a professor specializing in military ethics at the U.S. Naval War College. “If humans aren’t making those decisions, if it’s literally machines against machines, then it seems like it’s something else, right?”

It’s the ways in which robotic firepower can be harnessed by humans that have spurred some of autonomous weaponry’s sharpest critics. In 2015, Steve Wozniak and Elon Musk, along with Stephen Hawking and more than 1,000 robotics and artificial-intelligence researchers, signed an open letter warning that “autonomous weapons will become the Kalashnikovs” of the future, “ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group.” Google, which had several of its senior artificial-intelligence researchers among the letter’s signatories, published a statement of its own principles on artificial intelligence in June 2018, including an outright refusal to develop “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.”

For Work, the question of morality might come down to America’s adversaries. “There may be a place and time in the future where we’re up against an opponent who is less worried about delegating lethal authority to a machine,” he says. “But that’s a long way off, and we have a lot of time to think about that and a lot of time to take action and make sure that we can still control all the weapons.”

Kelsey D. Atherton is a defense technology journalist based in Albuquerque, New Mexico, and a staff writer at C4ISRNET.

