Trump Orders Federal Ban on Anthropic AI Technology Amid Pentagon Standoff Over Military Use Restrictions


WASHINGTON — President Donald Trump commanded all federal agencies Friday to phase out Anthropic’s artificial intelligence technology, escalating a bitter public confrontation between the Pentagon and the prominent AI company over military use restrictions that has divided Silicon Valley and exposed fundamental tensions about autonomous weapons, government surveillance, and the boundaries of corporate resistance to national security demands.

Defense Secretary Pete Hegseth speaks during a cabinet meeting at the White House, Thursday, Jan. 29, 2026, in Washington. (AP Photo/Evan Vucci)

Trump’s directive came slightly more than one hour before a Pentagon-imposed deadline requiring Anthropic to grant unrestricted military access to its Claude chatbot or face punitive measures—and nearly 24 hours after CEO Dario Amodei declared his company “cannot in good conscience accede” to Defense Department demands he characterized as eliminating essential safeguards against misuse.

Anthropic did not immediately respond to requests for comment regarding Trump’s announcement, which transforms what began as a contract dispute into a government-wide technology ban affecting one of artificial intelligence’s most valuable and influential startups.

The conflict centers on fundamental disagreements about AI’s appropriate role in national security operations and growing concerns about how increasingly capable machine learning systems could be deployed in high-stakes situations involving lethal force, sensitive intelligence collection, or domestic surveillance programs.

Anthropic, the San Francisco-based maker of Claude, could financially absorb losing its Pentagon contract given substantial backing from technology investors and corporate partners. However, Defense Secretary Pete Hegseth’s ultimatum this week posed existential risks at the apex of the company’s meteoric ascent from obscure computer science research laboratory to one of the world’s most prominent AI developers.

Military officials warned that beyond contract cancellation, they would designate Anthropic “a supply chain risk”—a classification typically reserved for foreign adversaries that could devastate the company’s critical partnerships with other businesses and government entities. Simultaneously, Pentagon lawyers threatened invoking the Cold War-era Defense Production Act to commandeer Claude’s technology regardless of Anthropic’s objections.

Amodei confronted a dilemma with no safe exit. Capitulating to Pentagon demands risked destroying trust throughout the booming AI industry, particularly among elite researchers and engineers drawn to Anthropic specifically because of its commitment to responsibly developing AI systems that, absent rigorous safeguards, could pose catastrophic dangers. Refusing invited exactly the punitive measures the Pentagon was now threatening.

Anthropic sought narrow assurances from the Pentagon that Claude would not be employed for mass surveillance of Americans or integrated into fully autonomous weapons systems operating without human oversight. However, after months of confidential negotiations exploded into public controversy, the company issued a Thursday statement declaring that new contract language “framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will.”

Sean Parnell, the Pentagon’s chief spokesman, asserted via social media that the military “has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement.” He emphasized that defense officials simply want to “use Anthropic’s model for all lawful purposes,” though neither he nor other Pentagon representatives detailed specific intended applications for the technology.

Emil Michael, defense undersecretary for research and engineering, subsequently attacked Amodei personally, alleging on platform X that the Anthropic CEO “has a God-complex” and “wants nothing more than to try to personally control the US Military and is ok putting our nation’s safety at risk.” The inflammatory rhetoric from a senior defense official signaled how thoroughly the dispute had deteriorated beyond standard contract negotiations into ideological confrontation.

Michael's characterization found little traction in much of Silicon Valley, where growing numbers of technology workers from Anthropic's primary competitors, OpenAI and Google, publicly endorsed Amodei's position Thursday through an open letter expressing solidarity with his refusal to compromise on safety principles.

OpenAI and Google, along with Elon Musk's xAI, hold existing contracts supplying their AI models for military applications, creating competitive dynamics that Pentagon officials hope to exploit by leveraging rival companies against Anthropic's resistance.

Musk aligned with the Trump administration Friday, declaring on his social media platform that “Anthropic hates Western Civilization” after Michael highlighted a previous iteration of Claude’s guiding principles encouraging “consideration of non-Western perspectives.” All leading AI models—including Musk’s Grok and OpenAI’s ChatGPT—operate according to programmed instructions governing chatbot values and behavior, which Anthropic terms a “constitution.”

While several Trump-allied technology leaders joined the controversy—including Musk and Palmer Luckey, co-founder of defense contractor Anduril—the polarizing debate over “woke AI” has positioned other executives uncomfortably as they balance commercial interests against ideological pressures.

“The Pentagon is negotiating with Google and OpenAI to try to get them to agree to what Anthropic has refused,” the open letter from some OpenAI and Google employees asserted. “They’re trying to divide each company with fear that the other will give in.”

In a surprising development from one of Amodei’s fiercest commercial rivals, OpenAI CEO Sam Altman sided Friday with Anthropic and questioned the Pentagon’s “threatening” approach during a CNBC interview, suggesting that OpenAI and most AI developers share identical ethical boundaries. Amodei previously worked for OpenAI before he and other leaders departed to establish Anthropic in 2021 amid disagreements about the original company’s direction.

“For all the differences I have with Anthropic, I mostly trust them as a company, and I think they really do care about safety,” Altman told CNBC. “I’ve been happy that they’ve been supporting our warfighters. I’m not sure where this is going to go.”

Concerns about the Pentagon’s confrontational strategy extended beyond Silicon Valley to Capitol Hill, where both Republican and Democratic lawmakers questioned the wisdom of publicly threatening a strategic American technology company. Senator Thom Tillis, a North Carolina Republican not seeking reelection, criticized Pentagon officials for conducting contract negotiations through public ultimatums rather than confidential discussions.

“Why in the hell are we having this discussion in public?” Tillis told journalists. “This is not the way you deal with a strategic vendor that has contracts. When a company is resisting a market opportunity for fear of negative consequences, you should listen to them and then behind closed doors figure out what they’re really trying to solve.”

Senator Mark Warner of Virginia, the ranking Democrat on the Senate Intelligence Committee, said he was "deeply disturbed" by accounts that the Pentagon was "working to bully a leading U.S. company." Warner characterized the episode as "further indication that the Department of Defense seeks to completely ignore AI governance," underscoring "the need for Congress to enact strong, binding AI governance mechanisms for national security contexts."

Retired Air Force General Jack Shanahan, who led the Defense Department's early artificial intelligence initiatives, raised concerns about the Pentagon's approach despite his own history of confronting technology-sector resistance. Shanahan ran Project Maven during Trump's first administration, an initiative that used AI to analyze drone footage and identify weapons targets. The project triggered massive Google employee protests, ultimately prompting the technology giant to withdraw from the contract and pledge not to build AI for weaponry.

“Since I was square in the middle of Project Maven & Google, it’s reasonable to assume I would take the Pentagon’s side here,” Shanahan wrote Thursday on social media. “Yet I’m sympathetic to Anthropic’s position. More so than I was to Google’s in 2018.”

Shanahan noted that Claude already operates extensively across government agencies including classified environments, and characterized Anthropic’s restrictions as “reasonable.” He emphasized that large language models powering chatbots like Claude remain “not ready for prime time in national security settings,” particularly for fully autonomous weapons systems. “They’re not trying to play cute here,” he wrote, defending Anthropic’s motivations.

The confrontation exposes fundamental tensions about democratic accountability and technological governance in an era when artificial intelligence capabilities are advancing faster than regulatory frameworks or ethical consensus can accommodate. Pentagon officials insist that operational necessity and national security imperatives require unrestricted access to cutting-edge AI systems, while technology companies argue that precisely because these tools have become so powerful, robust safeguards against misuse are essential.

Amodei emphasized Thursday that threats to designate Anthropic a security risk while simultaneously invoking the Defense Production Act to commandeer its technology were “inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.” He expressed hope that Pentagon officials would reconsider given Claude’s substantial value to military operations, but indicated that absent reconsideration, Anthropic “will work to enable a smooth transition to another provider.”

The dispute occurs against a broader backdrop of cultural transformation within Defense Department legal ranks. Hegseth told Fox News last February—weeks after becoming defense secretary—that “ultimately, we want lawyers who give sound constitutional advice and don’t exist to attempt to be roadblocks to anything.” That same month, Hegseth dismissed the Army and Air Force top lawyers without explanation, while the Navy’s chief legal officer had resigned shortly after the late 2024 election.

These personnel changes signal deliberate efforts to reduce legal constraints on military operations, creating environments where concerns about AI misuse might receive less rigorous scrutiny than technology companies and civil liberties advocates believe necessary.

The Associated Press previously documented Amodei’s Thursday statement that Anthropic “cannot in good conscience accede” to Pentagon demands, noting that new Defense Department contract language “made virtually no progress on preventing Claude’s use for mass surveillance of Americans or in fully autonomous weapons.” The company emphasized it was not abandoning negotiations but could not accept terms eliminating protections it considers fundamental.

The controversy’s resolution will establish precedents affecting how democratic societies balance innovation imperatives, national security requirements, and ethical constraints on emerging technologies whose capabilities and applications remain imperfectly understood. Whether Trump’s federal ban pressures Anthropic toward capitulation or whether the company maintains its position—potentially sacrificing government contracts to preserve principles—will signal to other AI developers how much leverage they possess when corporate values conflict with state power.

The Wire
