Google AI won't be used for weapons, unreasonable surveillance

James Delahunty
8 Jun 2018 3:51

Google AI technology will not be developed or used for military weapons or to enable unreasonable surveillance of civilians, according to Pichai.
In a blog post that sought to make clear Alphabet/Google's priorities in the emerging AI space, chief executive Sundar Pichai stated that Google AI software will not be permitted for use in weapons systems and other controversial programs. This follows recent internal protests at Google over some of the projects it was involved in, including one that used AI to identify objects in drone footage.
Responding to the understandable concern that such technology could be used to kill human beings more efficiently, Pichai made clear that the firm will not design or deploy AI for technology that is likely to cause overall harm.
"Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints," Pichai writes.
He went on to specifically rule out the use of Google AI in "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people."
In addition to addressing concerns about military uses of AI, Pichai wrote that Google's software will not be used in technologies that gather or use information for surveillance in violation of internationally accepted norms.
Read the blog post at: blog.google
