Trump Orders Federal Agencies to Halt Use of Anthropic AI Technology

President Donald Trump has directed all federal agencies, including the Defense Department, to “immediately cease” the use of technology from the AI firm Anthropic. The announcement came after a contentious exchange between Anthropic and the Pentagon over the use of its AI platform, Claude, which is deployed across the government, including on classified networks.

The order was issued on Friday, following a statement from Anthropic’s CEO, Dario Amodei, who asserted that he would not allow the platform to be used for mass surveillance of U.S. citizens or to control fully autonomous weapons. Trump characterized this stance as an attempt to “strong-arm” the Defense Department into complying with Anthropic’s conditions.

“I am directing every agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology,” Trump stated in a post on Truth Social. He emphasized that the government does not need or want Anthropic’s services and announced a six-month phase-out period for agencies currently utilizing its products.

During this period, Trump warned that Anthropic could face consequences if it does not assist in the transition away from its technology. “Anthropic had better get their act together and be helpful during this phase-out period, or I will use the full power of my Presidency to make them comply, with major civil and criminal consequences to follow,” he added.

In a related development, Defense Secretary Pete Hegseth ordered the Pentagon to designate Anthropic as a supply-chain risk to national security. Hegseth did not clarify why the government would continue to use the company’s technology in classified settings for an additional six months while citing such risks.

Amodei called the designation contradictory, noting that the Pentagon labeled Anthropic a security risk while simultaneously demanding continued access to its AI models. He stated, “They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a ‘supply chain risk’—a label reserved for U.S. adversaries, never before applied to an American company.”

Founded in 2021, Anthropic has developed AI models that are already widely integrated within the federal government, primarily through its partnership with Amazon Web Services. In July 2023, Anthropic, alongside other major AI firms, secured $200 million in defense contracts aimed at advancing the Pentagon’s AI capabilities.

On the same day as Trump’s announcement, the General Services Administration (GSA) stated it would remove Anthropic from its Multiple Awards Schedule and its USAI.gov platform. GSA Commissioner Josh Gruenbaum confirmed the termination of Anthropic’s OneGov deal, effectively ending contract availability across the Executive, Legislative, and Judicial branches.

“GSA stands with the President in rejecting attempts to politicize work dedicated to America’s national security,” GSA Administrator Edward C. Forst said. He stressed the importance of building resilient and secure AI solutions, indicating a commitment to collaborate with industry partners that align with national security priorities.

The rhetoric surrounding this issue has been notably charged. Hegseth accused Anthropic of “duplicitous” behavior, suggesting that its leadership prioritizes corporate interests over national safety. Sean Parnell, a Pentagon spokesperson, asserted that the Department of Defense (DoD) only seeks lawful access to Anthropic’s models, dismissing claims of wanting autonomous weapons or mass surveillance as false narratives propagated by critics.

In response, Amodei reiterated that his company insists on limits regarding the use of its technology, stating that certain applications could undermine democratic values. “In a narrow set of cases, we believe AI can undermine, rather than defend, democratic values,” he explained.

Anthropic did not immediately comment on Trump’s directive. The situation continues to unfold as federal agencies prepare to phase out the technology and debates over national security and AI ethics persist.