Why did the US government replace Anthropic with OpenAI as its AI provider?

Anthropic's rival, OpenAI, has just replaced it as the primary AI provider for the US government. The decision follows a dispute between Anthropic and the government over the loosening of AI safety regulations, a dispute that has, paradoxically, boosted the popularity of the Claude chatbot among users.


Amid escalating conflict in Iran, several reports indicate that Anthropic's AI models have been used to aid in attack planning. The tension between the company and the US government has therefore drawn significant attention to the ethical implications of using AI in military applications.

Anthropic has refused to remove AI safeguards designed to prevent the use of the technology for autonomous weapons or surveillance systems. This move has led the Pentagon, under Defense Secretary Pete Hegseth, to consider the company a "supply chain risk." Anthropic says it will challenge the decision and is prepared to take the matter to court.



Following that dispute, Sam Altman's OpenAI quickly filled the void, replacing Anthropic as the primary AI tool provider for numerous federal agencies, including several units within the U.S. Department of Defense.

Public reaction

Despite its ongoing disputes with the government, Anthropic has garnered significant public attention. The company's Claude app has climbed to the top of the App Store rankings in the US and UK, surpassing even OpenAI's ChatGPT.


Part of the reason is that many users began abandoning ChatGPT after news emerged of OpenAI's partnership with the Pentagon. In response to public backlash, CEO Sam Altman asserted that OpenAI's technology would not be intentionally used to spy on American citizens domestically.

Following a wave of criticism, OpenAI stated that it had worked with the government to add new provisions to the cooperation agreement. The company asserted that this agreement has more safeguards than any previous confidential AI agreement, including those previously signed with Anthropic.

Following the decision to change vendors, many federal agencies such as the State Department, the Treasury Department, and the U.S. Department of Health & Human Services have stopped using Anthropic's Claude platform and switched to OpenAI's AI models such as GPT-4.1.

The U.S. State Department also confirmed that its internal chatbot, StateChat, has switched to using OpenAI technology to maintain its AI-powered support services.



The replacement of Anthropic also affects defense contractors. Palantir, which had integrated Claude into its Maven Smart System platform, has been asked to remove Anthropic's AI. Several other businesses that previously used Claude for intelligence analysis and decision support are making similar adjustments.

Several organizations in the technology industry have expressed concerns about labeling Anthropic a "supply chain risk," arguing that it could disrupt technology procurement and slow innovation.

According to a BBC report, Anthropic's investors were also dissatisfied. They argued that while the company's safety-focused approach was commendable in principle, it cost the business significant commercial opportunities.

Can OpenAI be used for surveillance or autonomous weapons?

The US government's shift from Anthropic to OpenAI highlights the tension between national security needs and the ethical standards of AI. However, OpenAI asserts that it also sets clear limits on the use of its technology.

According to the company's statement, three main principles guide its collaboration with the U.S. Department of Defense. First, OpenAI's technology will not be used for mass surveillance systems targeting domestic citizens. Second, the company's AI will not be used to control autonomous weapon systems. Third, the AI will not be used for high-risk automated decisions, such as social scoring systems.

OpenAI also argues that some other AI labs have loosened or removed many of their safeguards, relying primarily on policy commitments to control risk in national security projects. According to the company, its own approach better mitigates the risk of technology misuse.

Overall, OpenAI becoming the new AI provider for the US government is seen as a significant turning point in the nation's AI policy. This decision not only impacts ethical standards in AI development but also shapes how the US government deploys and procures technology in the future.

Share by Marvin Fry
Updated 10 March 2026