Fears of a Terminator-style arms race have already prompted leading AI researchers and Silicon Valley leaders to call for a ban on killer robots. The United Nations plans to convene its first formal meeting of experts on lethal autonomous weapons later this summer. But a simulation based on the hypothetical first battlefield use of autonomous weapons showed the challenges of convincing major governments and their defense industries to sign any ban on killer robots.
In October 2016, the Chatham House think tank in London convened 25 experts to consider how the United States and Europe might react to a scenario in which China uses autonomous drone aircraft to strike a naval base in Vietnam during a territorial dispute. The point of the simulation was not to predict which country would first deploy killer robots, but to explore the differences in opinion that might arise on the U.S. and European side. Members of the expert group took on roles representing European countries, the United States and Israel, along with institutions such as the defense industry, non-governmental organizations (NGOs), the European Union, the United Nations and NATO.
The results were not encouraging for anyone hoping to achieve a ban on killer robots.
Neither the U.S. nor Israel nor any European country represented was willing to sign onto even a temporary ban on the development or use of killer robot systems. Perhaps the representatives felt the genie was already out of the bottle, given the scenario's premise that China had used such weapons, but in any case they seemed reluctant to restrict their own ability to deploy similar weapons.
The national governments seemed more willing to consider an international code of conduct for how such autonomous weapons might be used. But they opposed any code of conduct that relied on certain “metrics” to evaluate the performance of autonomous weapons, arguing that independent evaluation of their weapons' performance would threaten their industrial and military security.
Some differences between the U.S. and Europe did emerge in their broader views of arms-control agreements. One simulation participant pointed out that the U.S. tends to see arms-control agreements as tools for managing strategic order, such as the treaties restricting certain nuclear missile technologies. The U.S. also tends to resist outside pressure to limit its sovereignty in deciding either human rights or military issues.
By comparison, European countries have been more open to arms-control treaties based on the humanitarian goal of limiting death and injury. For example, European countries have generally embraced the 1997 Ottawa Treaty on landmines and the 2008 Convention on Cluster Munitions; the U.S. declined to sign either treaty.
If the Europeans push for more of a human rights perspective to guide development of future agreements on autonomous weapons, they may find themselves in conflict with the U.S.
Overall, the national governments seemed fairly unenthusiastic about outright banning killer robot technology in this particular simulation. But a bigger difference of opinion emerged between the NGO groups and the defense and tech industries.
The NGO groups stood almost alone in pushing for a ban on lethal autonomous weapons. During the simulation, the NGO representatives declared they had enlisted 20 new countries, including South Korea, Japan, Canada and Norway, willing to sign up for a killer robot ban in response to the first apparent battlefield use of such weapons.
But the NGOs failed to get the U.S., Israel or the European countries represented in the simulation to sign onto the idea of a killer robot ban. That may reflect the similarly unenthusiastic response in real life. To date, just 14 countries have signed on to a call for a full ban on the development or use of killer robot systems. And none of those is a member of the European Union or a permanent member of the UN Security Council.
The defense industry pushed back hard against any ban on the development and deployment of autonomous weapon systems. It also resisted attempts by some national governments to impose a code of conduct on industry activities rather than restricting their own militaries' use of such technologies.
Unlike the defense industry, the lone tech industry representative quietly lent some support and funding to the NGO effort to restrict or ban autonomous weapons during the simulation, but without openly opposing killer robot technologies. That may reflect real-life efforts by the tech industry to cooperate on guidelines for the ethical use of artificial intelligence technologies.
In reality, some countries already have the capability to build and deploy autonomous weapons. Some have built “man-in-the-loop” restrictions into their weapons, but those could just as easily be removed. For example, the Israel Aerospace Industries kamikaze drone known as the Harpy can already operate autonomously: once launched, it loiters in the air until it detects anti-aircraft radar on the ground, then automatically dives down and crashes into the radar installation to destroy it.
Perhaps the upcoming summer meeting of the United Nations Convention on Certain Conventional Weapons Group of Governmental Experts on Lethal Autonomous Weapons Systems will bear some fruit for advocates of a killer robot ban. But various militaries already seem to be on the slippery slope, having developed and deployed semi-autonomous weapons. If the first fully autonomous weapon makes its battlefield debut in the near future, the Chatham House simulation suggests that most countries are unlikely to abandon their own killer robot programs.