Pentagon demands unlimited access to Claude: why Anthropic refused and what it means for the tech market

Asia-Plus

A significant conflict has unfolded in the United States over who should control the application of powerful generative AI models in military contexts. During contract negotiations, the U.S. Department of Defense (Pentagon) demanded more extensive, effectively unlimited, access to Anthropic’s AI model, Claude. Anthropic refused, and the dispute quickly escalated into a federal political decision.


What did the Pentagon want?

According to Reuters, the disagreement centered on the terms of use. The military insisted that Anthropic loosen or rewrite the restrictions on Claude's use so that it could be employed "for any lawful purposes" in defense-related activities. Anthropic, meanwhile, sought stronger, more specific prohibitions on certain use cases. The dispute also involved deadlines for contract acceptance, with the Pentagon negotiating with multiple suppliers of large language models.


Why Anthropic said "no"

Anthropic’s public stance is centered on two key “red lines”:

· The AI must not be used for mass internal surveillance.

· The AI must not be used in fully autonomous lethal systems without human oversight.

These points, according to Reuters and The Guardian, were non-negotiable for the company, even at the cost of losing some government contracts.


Political outcome: federal agencies ordered to stop using Anthropic

The situation quickly escalated beyond closed-door negotiations. Reuters reported that, amid tensions, the U.S. Treasury Department and the Federal Housing Finance Agency (FHFA) announced they would stop using Anthropic's products, citing a presidential directive. The company was also labeled as a "supply-chain risk," effectively blocking its access to much of the federal market and opening the door for competitors to vie for defense contracts.


Competitors’ response: the OpenAI example

In response to pressure on Anthropic, OpenAI announced a deal to deploy its technologies in the U.S. Department of Defense’s classified networks. However, this agreement sparked an internal debate within the industry and among experts. The Verge noted that the term "any lawful use" could be problematic, as what’s considered "lawful" in intelligence or surveillance often has a broad, evolving interpretation.

OpenAI, in an official statement, asserted that its agreement includes "layered" safeguards and clear "red lines," urging that such conditions should be accessible to all AI companies, not just one.


Internal rebellion in big tech: employees sign open letters

The issue has sparked notable responses from employees within tech companies. TechCrunch, Forbes, Business Insider, Axios, and Engadget reported on an open letter signed by hundreds of Google employees and dozens of OpenAI staff. The letter expressed solidarity with Anthropic's position, calling for military access to AI to be limited unless explicit restrictions on surveillance and the autonomous use of force are in place.


Why this matters beyond the U.S.

This conflict sets a new precedent for the global AI market:

· Governments seek guaranteed access to the best models with fewer contractual restrictions.

· AI developers fear reputational and legal risks if their models are used in contentious scenarios (such as surveillance, target selection, or use-of-force decisions).

· There is growing competition over "whose agreement is safer," with the risk that "safety" becomes a PR contest rather than a set of transparent rules.

For Central Asia—and countries where the public sector is a key tech customer—a practical takeaway is clear: AI contracts will increasingly include clauses on acceptable use cases, audits, request logging, data storage, and “red lines.” Without such protections, suppliers may hesitate to engage in government projects. At the same time, authorities will likely push for broader mandates that allow “any lawful use.”


What’s next?

Reuters notes that U.S. agencies are beginning the process of divesting from Anthropic’s solutions, while the defense AI contract market is shifting in favor of competitors. In the coming months, the central question will not be "will AI be used in defense?" but rather, "under what rules and who defines those rules: law, contract, or internal company policy?"
