Google Decides Not to Renew a Military AI Contract


The U.S. military logo for Project Maven, also known as the Algorithmic Warfare Cross-Functional Team. Credit: U.S. Department of Defense

Google recently bowed to employee protests, deciding to wind down its involvement in a U.S. military initiative called Project Maven when the contract expires next year. The Pentagon project focuses on harnessing deep learning algorithms (specialized machine learning techniques often described as "artificial intelligence") to automatically detect and identify people or objects in military drone surveillance videos.

Company emails and internal documents obtained by the New York Times show Google’s attempts to keep its role in the U.S. Department of Defense project under wraps. By early April, more than 3,000 Google employees had already signed an internal letter voicing concerns that Google’s involvement with “military surveillance” could “irreparably damage Google’s brand and its ability to compete for talent.” On June 1, Gizmodo reported that Google’s leadership had told employees that the company would not seek renewal of the Project Maven contract after its 2019 expiration.

“I’m glad to see that the Google leadership is listening to the Google employees, who like me, think it would be a serious mistake for Google to do military contracts,” said Yoshua Bengio, a professor of computer science at the University of Montreal in Canada and a pioneer in deep learning research.

Project Maven, also known as the Algorithmic Warfare Cross-Functional Team, appears initially focused on training computer algorithms to automatically spot and classify objects in videos. Such automated surveillance technologies already exist to some degree and could spare the Pentagon's human analysts from eyeballing thousands of hours of surveillance footage taken by large military drones in countries such as Syria, Iraq and Afghanistan.
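
To make the idea concrete, here is a minimal sketch of frame-by-frame object detection on surveillance video, using an off-the-shelf pretrained detector from the open-source torchvision library. The model choice, class labels, score threshold and file path are all illustrative assumptions; Project Maven's actual algorithms and data are not public.

```python
# Minimal illustrative sketch: scan video frames with a pretrained detector
# and flag frames containing people, so a human reviewer need not watch all
# of the footage. This stands in for the general technique only.
import cv2                    # assumed available for video decoding
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

# An off-the-shelf detector pretrained on COCO (80 everyday object classes).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

PERSON = 1  # COCO class index for "person"

@torch.no_grad()
def detect_people(frame_rgb, score_threshold=0.8):
    """Return [x1, y1, x2, y2] boxes for people found in one RGB frame."""
    output = model([to_tensor(frame_rgb)])[0]   # single-image batch
    keep = (output["labels"] == PERSON) & (output["scores"] >= score_threshold)
    return output["boxes"][keep].tolist()

# "surveillance.mp4" is a placeholder path for this sketch.
cap = cv2.VideoCapture("surveillance.mp4")
frame_index = 0
while True:
    ok, frame_bgr = cap.read()
    if not ok:
        break
    boxes = detect_people(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if boxes:
        print(f"frame {frame_index}: {len(boxes)} possible people")
    frame_index += 1
cap.release()
```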

Similar automated surveillance technologies can be used for beneficial purposes beyond military AI on battlefields. For example, Carnegie Mellon University researchers have developed machine learning software that can automatically detect both wildlife and human poachers in drones’ thermal camera imagery taken at night.
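
The published work relies on trained detectors, but the underlying intuition is simple: warm bodies stand out as bright blobs against cool nighttime terrain. Below is a toy sketch of that intuition, assuming 8-bit thermal frames and plain intensity thresholding rather than the researchers' actual learned models.

```python
# Toy sketch of the thermal-imagery intuition: warm bodies (animals or
# people) appear as bright blobs against cool nighttime terrain. Real
# systems use learned detectors; this thresholding version is illustrative.
import numpy as np
from scipy import ndimage

def find_hot_spots(thermal_frame, intensity_threshold=200, min_area=25):
    """Return (x, y) centroids of warm regions in an 8-bit thermal image."""
    hot = thermal_frame > intensity_threshold   # pixels warmer than terrain
    labeled, num_regions = ndimage.label(hot)   # group adjacent hot pixels
    centroids = []
    for region in range(1, num_regions + 1):
        mask = labeled == region
        if mask.sum() >= min_area:              # drop tiny noise blobs
            ys, xs = np.nonzero(mask)
            centroids.append((float(xs.mean()), float(ys.mean())))
    return centroids
```

Deciding whether a given hot spot is an animal or a person is where the learned classifier comes in, and that same classification step is what makes such a pipeline useful to conservationists and militaries alike.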

Nonetheless, the idea of developing automated surveillance technologies for use by the U.S. military touched a nerve at Google. The letter signed by several thousand employees argues that such surveillance capabilities could easily be used to assist in drone strikes and other missions with "potentially lethal outcomes."

Stuart Russell, a professor of computer science and AI researcher at the University of California, Berkeley, said that he does not personally oppose all uses of AI for military purposes. For example, he suggested that military AI used in reconnaissance, logistical planning and anti-missile defense could fall under ethical uses of such technology.

Many AI researchers, including Bengio and Russell, have publicly opposed development of technologies for lethal autonomous weapons that could actively identify and engage targets without requiring direct orders from humans. So far, Project Maven's goals are not directly tied to development of such autonomous weapons, which are colloquially referred to as "killer robots" by many who oppose them. Researchers recently organized a boycott campaign that led a South Korean university to agree not to develop autonomous weapons under a prior partnership with a defense company.

But the dual-use nature of AI technologies that could be repurposed for lethal weapons or missions makes it trickier to regulate usage of such technologies. The same military AI technology that enables automated reconnaissance could also empower an autonomous weapon if combined with a vision-guided missile, Russell pointed out. Still, he suggested that an international ban on the use of autonomous weapons might have helped nip the issue in the bud for Google and its employees.

“If there were a treaty banning autonomous weapons, then Google researchers could work on defense-related AI without worrying that the AI would be used to kill people,” Russell said. “The threat of misuse goes away.”

The recent decision on Project Maven does not mean Google will necessarily withhold all its engineering expertise and technologies from the Pentagon in the future. After all, Eric Schmidt, former executive chairman of Google and current technical advisor to Google parent company Alphabet, remains a member of the Defense Innovation Board, an advisory body to the U.S. military. But even if Google steps back entirely from such military AI contracts, it's almost certain that other U.S. tech giants and defense companies will be eager to take on such work.
