Forget Bans: UN Stuck on Defining Killer Robots


An unmanned military robot rolls out of a U.S. Marine amphibious vehicle during the Ship-to-Shore Maneuver Exploration and Experimentation Advanced Naval Technology Exercise 2017 at Marine Corps Base Camp Pendleton, California. Credit: Lance Cpl. Jamie Arzola

A United Nations meeting on lethal autonomous weapons ended in disappointment for advocates hoping that the world would make progress on regulating or banning “killer robot” technologies. The UN group of governmental experts barely even scratched the surface of defining what counts as a lethal autonomous weapon. But instead of trying to craft a catch-all definition of “killer robots,” the group might have better luck next time by focusing on the role of humans in controlling such autonomous weapons.

That idea of focusing on the role of humans in warfare has been supported by a number of experts and non-governmental organizations such as the International Committee of the Red Cross. It would put the spotlight on the legal and moral responsibilities of the soldiers and officers who might coordinate swarms of military drones or issue orders to a platoon of robotic tanks in the near future. And it avoids the pitfalls of trying to define lethal autonomous weapons while artificial intelligence and robot technologies continue to evolve much faster than the slow-grinding gears of a UN body that meets just once a year.

“One criticism people have made, and rightly so, is that if you craft a ban on the state of technology today, you could be wrong about the technology in the near future,” says Paul Scharre, director of the Technology and National Security Program at the Center for a New American Security (CNAS) and author of the upcoming book “Army of None” scheduled for publication in spring 2018. “In this area of lethal autonomous weapons, you might be very wrong.”

One non-military example of how quickly AI technology outpaces regulatory discussions comes from DeepMind’s AlphaGo program. In the first half of 2016, AlphaGo defied expert predictions from just a few years earlier by defeating one of the world’s best human players at the ancient board game of Go. In 2017, DeepMind unveiled a more general successor called AlphaZero, which taught itself chess within four hours and proceeded to beat the very best specialized chess-playing computer programs.

During that same period of jaw-dropping progress in AI, the international community accomplished very little despite convening several UN meetings on lethal autonomous weapons. The latest UN meeting, held in November 2017 and involving the Group of Governmental Experts on Lethal Autonomous Weapons Systems, accomplished little beyond an agreement to reconvene for 10 days in 2018.

The Stumbling Block on Banning Killer Robots

A big problem for advocates looking to ban lethal autonomous weapons is that they have no support from the leading military powers most likely to deploy and use such weapons. Many leading AI researchers and Silicon Valley leaders have called for a ban on autonomous weapons. But the non-governmental organizations (NGOs) pushing for one largely lack the backing of national governments as they try to convince the world’s military giants to keep lethal autonomous weapons out of their arsenals.

“You have a cadre of NGOs basically telling major nation states—a number of great powers such as Russia, China and the United States that have all said AI will be central to the future of national security and warfare—that they can’t have these weapons,” Scharre says. “The reaction of the military powers is, ‘Of course I would use them responsibly, who are you to say?’”

This seems consistent with past expectations of how likely a ban on lethal autonomous weapons would be to succeed. In October 2016, the London-based think tank Chatham House held a roleplaying exercise imagining a future scenario in which China becomes the first country to use lethal autonomous weapons in warfare. That exercise, which focused on the viewpoints of the United States, Israel and European countries, found that none of the experts roleplaying the various governments were willing to sign onto even a temporary ban on autonomous weapons.

NGOs such as the Campaign to Stop Killer Robots point out that at least 22 countries want a legally binding agreement banning lethal autonomous weapons. But Scharre noted that none of those countries are among the major military powers developing the AI technologies needed to deploy such weapons.

Russia Says Nyet to the Ban

In fact, Russia may have already dug the proverbial grave for any killer robots ban by announcing that it would not be bound by any international ban, moratorium or regulation on lethal autonomous weapons. Journalist Patrick Tucker at Defense One described the Russian statement, which coincided with the UN meeting of governmental experts, this way:

Russia’s Nov. 10 statement amounts to a lawyerly attempt to undermine any progress toward a ban. It argues that defining “lethal autonomous robots” is too hard, not yet necessary, and a threat to legitimate technology development.

Tucker went on to cite several anonymous experts in attendance who complained that the five-day meeting barely even touched on the fundamental step of defining lethal autonomous weapons.

Finding common ground on definitions of killer robots may seem like basic stuff, but in some sense it’s a necessity for governmental representatives to make sure they’re not just talking past one another. “One person might be envisioning a Roomba with a gun on it, another person might be envisioning the Terminator,” Scharre says.

How Killer Robots Could Change Human Soldiers

Major military powers such as Russia and the United States may find it easier to agree on the obligations and responsibilities of the humans issuing orders to future swarms of autonomous weapons. But potential pitfalls remain even if they succeed there. One of the biggest is that military leaders or individual soldiers could come to feel less responsible for their actions after unleashing a swarm of killer robots upon the battlefields of tomorrow.

“The thing that worries me is what if we get to the point where humans are accountable, but the humans don’t actually feel like they’re the ones doing the killing and making decisions anymore?” Scharre says.

The heart of the military profession is making decisions about the use of force. As a former U.S. Army Ranger, Scharre worries that lethal autonomous weapons could create more psychological distance between a soldier’s sense of individual responsibility and the act of using a potentially lethal weapon. Yet he noted that very little has been written about what this implies for military professional ethics.

In other words, the world could eventually clarify the legal framework for how humans hold moral and legal responsibility when wielding lethal autonomous weapons in warfare. But the rise of killer robots may still lead military leaders and individual soldiers to feel less empathy for the people on the receiving end of such weapons, exercise less restraint, and more easily lose sight of their moral and legal responsibilities.

“I think technology has forced upon us a fundamental question of the human role in the lethal decision making in war,” Scharre says.
