Google says it won't build AI for weapons

Andrew Cummings
June 10, 2018

Google CEO Sundar Pichai's post also listed four application areas in which Google will not design or deploy AI.

But while it's not clear yet whether these principles go far enough to address employees' concerns, the new rules could have an impact that reaches far beyond the walls of Google. Google plans to make its technologies and services available in accordance with the principles, but it can only evaluate their likely uses. The company says it will not use its AI for surveillance that violates "internationally accepted norms", wording that leaves room for scenarios in which Google's AI could still be used for surveillance.

The policy comes as Google employees reportedly pressured the tech giant to cancel its ongoing participation in Project Maven, a Pentagon effort to use AI to analyze footage from aerial drones. "As a leader in AI, we feel a deep responsibility to get this right," Pichai wrote.


An internal memo also revealed that a member of Google's defense sales team believed that participation in Maven was directly tied to a government contract worth billions of dollars, according to The Intercept.

Thousands of Google employees signed an open letter to management urging the company to cut ties with the Department of Defense drone program after details were leaked. Google executives said last week that they would not renew the deal for the military's AI endeavor, known as Project Maven, when it expires next year.

Google explicitly states that it will continue to work with both the government and the military "in many other areas" not related to AI for weapons.


The blog post also notes that Google will not build AI technologies that cause overall harm, or weapons and other technologies designed to "cause or directly facilitate injury to people". The other areas where Google will keep working with the military "include cybersecurity, training, military recruitment, veterans' healthcare, and search and rescue", said Pichai.

Google will continue some work for the military, Pichai said. Even so, some employees had opposed the project and quit in protest, and TechCrunch reported that the dispute was a microcosm of broader anxiety about AI and how the technology can and should be employed. There is no true accountability to the public here: Google has not committed to an independent review process, a step suggested by Electronic Frontier Foundation Chief Computer Scientist Peter Eckersley in a statement to Gizmodo.

As mentioned above, Google's policy means it won't work on surveillance that violates internationally accepted norms, but that phrase is open to interpretation, and its meaning depends on who's doing the interpreting.

