Google employees respond after company drops its promise on AI weapons: 'Are we the baddies?'

  • Google's new AI guidelines removed a promise not to use AI for weapons or surveillance.
  • Some employees have been reacting on the company's internal message board.
  • Google said it's important for businesses and governments to work together for "national security."

After Google retracted its promise not to use artificial intelligence for weapons or surveillance, some employees posted their reactions on the company's internal message board.

The company said on Tuesday that it had updated its ethical AI guidelines, which lay out how Google will and won't deploy its technology. The new version removed wording that previously vowed Google would not use AI to build weapons, surveillance tools, or "technologies that cause or are likely to cause overall harm."

Several Google employees expressed dismay at the change on the company's internal message board, Memegen, according to posts shared with Business Insider.

One meme showed CEO Sundar Pichai querying Google's search engine for "how to become weapons contractor?"

Another employee riffed on a popular meme of an actor dressed as a Nazi soldier in a TV comedy sketch. "Google lifts a ban on using its AI for weapons and surveillance," it read. "Are we the baddies?"

Another post showed Sheldon from The Big Bang Theory asking why Google would drop its red line for weapons before seeing media reports about Google working more closely with defense customers, including the Pentagon, and responding, "Oh, that's why."

The memes were shared by just a handful of Google staff. With more than 180,000 employees, the posts reflect only a fraction of the workforce, and some Googlers may well support tech companies working more closely with defense customers and the US government.

In recent years, there's been a shift among some tech companies and startups toward offering more of their technology, including AI tools, for defense purposes.

While Google did not directly acknowledge the removal of the wording, Google DeepMind CEO Demis Hassabis and SVP for Technology and Society James Manyika co-authored a blog post on Tuesday in which they described an "increasingly complex geopolitical landscape" and said it was important for businesses and governments to work together in the interest of "national security."

"We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights," they wrote in the blog post. "And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security."

Reached for comment, a Google spokesperson pointed BI to the company's Tuesday blog post.

In 2018, Google employees protested the company's contract with the Pentagon, which used Google's AI for warfare. The company abandoned the contract and laid out a set of AI principles that included examples of things it would not pursue, explicitly mentioning "weapons" and surveillance tools.

While the blog post about the 2018 principles is still live, it now includes a link at the top pointing users to the updated guidelines.

Google's decision to draw red lines around weaponry has left it out of military deals signed by other tech giants, including Amazon and Microsoft. Huge strides in AI have been made since 2018, and the US is now competing with China and other countries for supremacy in the technology.

Are you a current or former Google employee with more to share? You can reach this reporter using the secure messaging app Signal (+1 628-228-1836) or secure email ([email protected]). We can keep you anonymous.
