The Hegsethian jihad against Anthropic
Did OpenAI cast aside safety concerns about fully automated weapon systems?

The dust is still settling, but Friday may go down as the most pivotal day in Donald Trump’s presidential term—the day that he formally split the Republican alliance with big American technology companies and laid bare two Silicon Valley factions whose differences have been largely invisible to the general public.
Let’s begin with the background. In recent weeks, the Trump administration has been in private negotiations with multiple artificial intelligence companies, including Anthropic, maker of the Claude model, over how their products could be used for surveillance and war purposes. Though not as famous as OpenAI, the maker of ChatGPT, Anthropic is regarded by many software developers and other technical users as the maker of the most advanced AI models. It was the first major chatbot vendor to have a model approved by the U.S. government for use in classified settings.
Leaks about the administration’s disagreements with Anthropic began trickling out this week. In response, the AI giant released a statement on its website Thursday explaining its position to the public. It said that the Trump administration was trying to make the company sign a new contract with the Department of Defense (which Trump demands be referred to as the “Department of War”) that could allow Anthropic’s products to be used for “mass domestic surveillance” and “fully autonomous weapons”—uses that previous contracts had “never” permitted because the company did not believe they could be deployed safely while preserving Americans’ civil liberties:
However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values. Some uses are also simply outside the bounds of what today’s technology can safely and reliably do. Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now:
Mass domestic surveillance. We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass domestic surveillance is incompatible with democratic values. AI-driven mass surveillance presents serious, novel risks to our fundamental liberties. To the extent that such surveillance is currently legal, this is only because the law has not yet caught up with the rapidly growing capabilities of AI. For example, under current law, the government can purchase detailed records of Americans’ movements, web browsing, and associations from public sources without obtaining a warrant, a practice the Intelligence Community has acknowledged raises privacy concerns and that has generated bipartisan opposition in Congress. Powerful AI makes it possible to assemble this scattered, individually innocuous data into a comprehensive picture of any person’s life—automatically and at massive scale. […]
Fully autonomous weapons. Partially autonomous weapons, like those used today in Ukraine, are vital to the defense of democracy. Even fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets) may prove critical for our national defense. But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America’s warfighters and civilians at risk. We have offered to work directly with the Department of War on R&D to improve the reliability of these systems, but they have not accepted this offer. In addition, without proper oversight, fully autonomous weapons cannot be relied upon to exercise the critical judgment that our highly trained, professional troops exhibit every day. They need to be deployed with proper guardrails, which don’t exist today.
Anthropic’s position garnered significant support within the larger AI industry. More than 500 Google employees and 90 OpenAI employees launched a petition website called “We Will Not Be Divided,” which stated that the Trump administration was “trying to divide each company with fear that the other will give in,” and that the employees wanted their executives to stand firm against mass domestic surveillance and fully autonomous weapons. On Monday, the Department of Defense announced that it had signed an agreement with Elon Musk’s xAI to allow its Grok software in classified settings.
In a midday Eastern interview on Friday with CNBC, OpenAI CEO Sam Altman seemed to indicate that he supported Anthropic’s viewpoint.
“For all the differences I have with Anthropic, I mostly trust them as a company, and I think they really do care about safety, and I’ve been happy that they’ve been supporting our war fighters,” Altman said. “I’m not sure where this is going to go.”
Perhaps in response to Altman, within a few hours, Trump lashed out at Anthropic, attacking it in a spittle-flecked post to his personal social media website as a “Radical Left AI company” which he was going to ban from all federal contracts:
THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS! That decision belongs to YOUR COMMANDER-IN-CHIEF, and the tremendous leaders I appoint to run our Military.
The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution. Their selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY.
Therefore, I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology. We don’t need it, we don’t want it, and will not do business with them again! There will be a Six Month phase out period for Agencies like the Department of War who are using Anthropic’s products, at various levels. Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow.
WE will decide the fate of our Country — NOT some out-of-control, Radical Left AI company run by people who have no idea what the real World is all about. Thank you for your attention to this matter. MAKE AMERICA GREAT AGAIN!
Following Trump’s announcement, Secretary of Defense (and infamous security risk) Pete Hegseth announced at 2:14 Eastern on X that he was designating Anthropic a “supply-chain risk to national security,” an unprecedented executive branch ruling for an American-owned company that effectively bans all U.S. agencies from doing business with Anthropic.
Hegseth’s post expanded on Trump’s, falsely claiming that Anthropic was trying to exercise a private-sector veto over official policy decisions. His response, larded up with political and ideological venom, singled out “effective altruism,” a niche ideology that most of the public has not heard of:
This week, Anthropic delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon.
Our position has never wavered and will never waver: the Department of War must have full, unrestricted access to Anthropic’s models for every LAWFUL purpose in defense of the Republic. […] Cloaked in the sanctimonious rhetoric of “effective altruism,” they have attempted to strong-arm the United States military into submission - a cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives.
The Terms of Service of Anthropic’s defective altruism will never outweigh the safety, the readiness, or the lives of American troops on the battlefield.
Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military. That is unacceptable. [emphasis added]
Several hours after Hegseth’s attack on Anthropic, Altman followed up with an X post at 6:56pm Eastern which announced that OpenAI had signed an agreement with the Defense Department to deploy its products in classified settings.
AI safety and wide distribution of benefits are the core of our mission. Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.
We also will build technical safeguards to ensure our models behave as they should, which the DoW also wanted. We will deploy FDEs [field deployment engineers] to help with our models and to ensure their safety, we will deploy on cloud networks only.
We are asking the DoW to offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept. We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements.
(As of this writing, Google has not made any major announcements regarding its Gemini model and related products.)
The wording of Altman’s response prompted much speculation on social media as to what had transpired between OpenAI and the Trump administration. Was this yet another case of Trump reversing himself, TACO-style? Or was this a Republican inside deal of the sort that has become so common during the second Trump administration?
The latter scenario is a very real possibility considering that Greg Brockman, OpenAI’s co-founder and president, became the largest donor to Trump’s political action committee with a $25 million gift.
No formal contracts have been released yet, and the ink has barely dried on whatever was signed, but the particular words Altman used in his announcement seem to indicate that OpenAI has cast aside its previously stated safety concerns.
This portion is key: “human responsibility for the use of force, including for autonomous weapon systems.”
Parsed very carefully, what this seems to indicate is that humans will be “responsible” for the force used by autonomous robotic systems, but it does not indicate that such systems will be prohibited from acting without authorization.
Altman’s usage differs materially from how Anthropic described the use of autonomous weapon systems in its blog post referenced above: “fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets).”
Even a “humans out of the loop” system as described by Anthropic would still ultimately have humans who authorized its deployment and who signed off on its initial instructions and orders. As currently worded, an Altman-style robot would have a human be “responsible” for its actions, but the system could determine those actions on its own. A human could manually overrule them at any time, but would not have to authorize them every time. This sounds exactly like the problem Anthropic and the We Will Not Be Divided petitioners were concerned about.
We don’t yet know precisely what OpenAI and xAI have agreed to do. The public needs this information to make informed choices in November’s elections. Donald Trump said essentially nothing about AI policy on the 2024 campaign trail, so the American public is entitled to see what he is trying to do in their name.
As for Anthropic, the company promised to fight Hegseth’s threats in court:
Designating Anthropic as a supply chain risk would be an unprecedented action—one historically reserved for US adversaries, never before publicly applied to an American company. We are deeply saddened by these developments. As the first frontier AI company to deploy models in the US government’s classified networks, Anthropic has supported American warfighters since June 2024 and has every intention of continuing to do so.
We believe this designation would both be legally unsound and set a dangerous precedent for any American company that negotiates with the government.
No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons. We will challenge any supply chain risk designation in court.
NOTE: I will be publishing a follow-up article discussing effective altruism and rifts within the U.S. technology industry later on Saturday so please subscribe to stay in touch.