In a major shift from its earlier ethical stance, Google has quietly removed key passages from its AI principles that prohibited the development of AI for potentially harmful applications, including weapons. The change marks a significant departure from the company's stated commitment to responsible AI development.
The company eliminated a section titled "AI applications we will not pursue," which had explicitly stated Google's refusal to develop technologies "that cause or are likely to cause overall harm." The removal has raised concerns among AI ethics experts and former Google employees.
Margaret Mitchell, former co-lead of Google's ethical AI team, expressed serious concerns about the shift. "This means Google will probably now work on deploying technology directly that can kill people," Mitchell stated.
In response to questions about the change, Google referenced a blog post by senior executives James Manyika and Demis Hassabis. The post emphasized that democracies should lead AI development, guided by values such as freedom, equality, and human rights. It also called for collaboration between companies, governments, and organizations to create AI systems that support national security while promoting global growth.
The revision aligns with Google's recent moves toward closer military collaboration. The company already provides cloud services to U.S. and Israeli military organizations, decisions that have sparked internal employee protests.
The change may position Google to compete more effectively with other tech companies already involved in military AI projects. It could also open doors to increased government funding for AI research and development.
The shift stands in stark contrast to Google's original "Don't be evil" motto and follows a broader industry trend of tech giants revising previously held ethical positions. It suggests Google has determined that the potential benefits of military AI collaboration outweigh the risk of negative public reaction.