Google and OpenAI employees publicly back Anthropic, urging ethical limits on military AI applications.
Hundreds of employees from major artificial intelligence companies are publicly supporting Anthropic in a growing dispute with the U.S. military over how AI technology should be used. More than 300 workers from Google and over 60 employees from OpenAI have signed an open letter urging their companies to defend strict limits on military AI use.
The letter comes as the Pentagon reportedly pushes for broader access to advanced AI systems, raising concerns among AI researchers about surveillance, weapons automation, and ethical boundaries.
Anthropic Draws Clear Ethical Lines
Anthropic, developer of the Claude family of AI models, has taken a firm position against allowing its technology to be used for domestic mass surveillance or fully autonomous weapons. The company has already worked with the U.S. Department of Defense in limited capacities but refuses to cross those ethical boundaries. CEO Dario Amodei reiterated the stance in a public statement, saying the company cannot agree to unrestricted military use of its AI.
The Pentagon reportedly warned that if Anthropic refuses to comply, it could label the company a supply-chain risk or invoke the Defense Production Act (DPA), a powerful U.S. law that lets the government compel companies to prioritize production deemed essential to national security. Amodei responded by pointing to what he called a contradiction in the government's position: treating Anthropic simultaneously as a potential risk and an essential supplier.
Employees Unite Across Rival Companies
The open letter shows rare unity among employees of competing AI firms. Workers from Google and OpenAI urged their leadership to stand with Anthropic and maintain strict limits on military applications.
The letter warned that government pressure could divide AI companies and weaken their collective ability to resist demands they consider unethical. Signatories wrote that companies should “put aside their differences and stand together” to protect agreed-upon safety boundaries. Their main concerns include preventing AI from enabling:
- Mass domestic surveillance
- Fully autonomous weapons
- Unrestricted military control over advanced AI systems
This coordinated employee action highlights growing internal activism within tech companies over how AI should be deployed.
Leadership Responses Signal Sympathy
While neither Google nor OpenAI has issued an official corporate response, individual leaders have expressed concerns aligned with Anthropic's position. OpenAI CEO Sam Altman said publicly that the Pentagon should not use government authority to threaten companies into compliance.
Similarly, Google Chief Scientist Jeff Dean criticized mass surveillance in public remarks, warning that it could violate civil liberties and be misused for political purposes. These comments suggest that senior AI leaders recognize the ethical risks involved, even as their companies continue discussions with government agencies.
Military Already Uses AI for Limited Tasks
The U.S. military already uses AI tools for unclassified applications. Systems such as ChatGPT, Google's Gemini, and other AI platforms assist with administrative tasks, data analysis, and research. However, expanding AI use into classified intelligence, surveillance, or autonomous defense systems would represent a much larger shift.
Defense officials argue that advanced AI could improve national security, intelligence analysis, logistics, and cyber defense. But critics worry that expanding military access too quickly could create long-term ethical and societal risks.
Ethical Debate Reflects Broader Industry Tensions
The dispute highlights a major debate shaping the future of artificial intelligence: how to balance national security with ethical responsibility. AI systems are becoming increasingly powerful, capable of analyzing massive datasets, identifying patterns, and automating complex decisions. These capabilities could help governments defend against cyberattacks and other security threats.
At the same time, AI could enable intrusive surveillance or autonomous weapons that operate without human control. Many AI researchers believe strong safeguards must be established before such uses become widespread. Anthropic has positioned itself as a leader in AI safety, emphasizing responsible deployment and transparency. Its refusal to comply fully with Pentagon demands reflects a growing movement within the tech industry to set ethical limits.
Defense Production Act Raises Serious Concerns
The Pentagon’s potential use of the Defense Production Act has raised the stakes considerably. Historically, the DPA has been invoked during wartime or national emergencies to ensure the production of critical materials. Applying it to AI companies would mark a major escalation and could reshape the relationship between the government and private AI developers.
If invoked, companies could be legally required to provide technology or prioritize military contracts. This possibility has alarmed employees who fear it could undermine corporate autonomy and ethical commitments.
AI Companies Face Increasing Government Pressure
Governments worldwide are competing to secure access to advanced AI systems. Artificial intelligence is now considered a strategic technology, similar to nuclear power or semiconductors. Military leaders believe AI could improve battlefield awareness, intelligence gathering, and defense planning.
At the same time, private companies have become the primary developers of cutting-edge AI, giving them significant influence over how the technology is used. This creates tension between corporate ethics and national security priorities.
Industry Unity Could Shape Future AI Policy
The open letter represents one of the largest coordinated protests by AI workers against military AI expansion. Employee activism has previously influenced major decisions at tech companies, including choices about cloud contracts and surveillance technologies.
If companies like Google, OpenAI, and Anthropic maintain unified ethical standards, they could shape how AI is used globally. However, if even one company agrees to broader military use, others may feel pressure to follow to remain competitive. This dynamic makes industry coordination critical.
Future of Military AI Remains Uncertain
The Pentagon continues negotiations with AI companies, but Anthropic has made clear it will not compromise its core ethical principles. The outcome could set an important precedent for how AI companies interact with governments in the future.
If companies successfully resist government pressure, it could reinforce corporate control over AI deployment. If governments gain broader authority, it could accelerate military AI adoption worldwide.
Conclusion
The open letter from Google and OpenAI employees marks a pivotal moment in the relationship between AI companies and the military. By supporting Anthropic’s refusal to allow mass surveillance or autonomous weapons, workers are pushing for ethical safeguards in one of the most powerful technologies ever created.
As artificial intelligence becomes increasingly central to national security, the decisions made now will shape how AI is used for decades to come. The standoff between Anthropic and the Pentagon highlights the growing struggle between technological innovation, ethical responsibility, and government power.
