# Amodei Torches Altman: OpenAI's Military Power Grab is a Sham
Anthropic's CEO Dario Amodei didn't mince words: OpenAI's spin on its fresh Pentagon deal is 'straight up lies'. Hours after Anthropic walked away from contract talks on February 28, 2026, citing unbreakable bans on mass surveillance and killer robots, OpenAI pounced, inking a deal to plug its models into classified military networks. This isn't just drama; it's a seismic shift for AI devs chasing government cash while dodging doomsday scenarios.
<> "Straight up lies." – Dario Amodei on OpenAI's safeguard hype[web:0 from query]/>
Let's unpack the betrayal. Anthropic held the line, demanding explicit contract clauses against domestic spying and autonomous weapons. The Pentagon, under Defense Secretary Pete Hegseth, balked, labeling the company a supply chain risk and halting Claude use across federal agencies right before Trump's February 28 order. OpenAI? It swooped in with "any lawful purpose" vagueness, leaning on US law like the Fourth Amendment and FISA, plus cloud-only deployments, human-in-the-loop policies, and its 'safety stack'. Sam Altman even bragged that these amount to "more guardrails than Anthropic's," urging everyone to sign up.
Bull. OpenAI used to back Amodei's position: Altman cheered it publicly, and employees signed letters endorsing those same red lines. Now? A rushed deal (admitted to be "opportunistic and sloppy"), timed suspiciously just before a US-Iran strike, sparking protests and Altman's hasty March 2 amendments. It's a masterclass in PR pivots over principles. Amodei smells the BS: OpenAI is selling its fuzzy legal shields as tougher than the steel walls Anthropic demanded and the Pentagon rejected.
For developers, this is your wake-up call:
- Cloud jail: no edge computing; your military-specific customizations stay server-bound, with OpenAI's security-cleared engineers babysitting.
- Safety stack lockdown: you can't gut the alignment layers; instead you build wrappers around the baked-in blocks on surveillance and autonomy (see the sketch after this list).
- Precedent set: ditch rigid contractual bans for 'multi-layered' legal-plus-technical hybrids, or risk blacklisting like Anthropic.
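To make the wrapper idea concrete, here's a minimal, hypothetical sketch of what an application-level policy gate might look like: screen the request, refuse anything that trips a prohibited category, and only then forward it to the cloud-hosted model. Everything here (the `call_model` placeholder, the `PROHIBITED_CATEGORIES` keyword list, the `guarded_completion` gate) is an illustrative assumption, not OpenAI's actual API or safety stack.

```python
# Hypothetical policy-wrapper sketch. call_model() and the keyword screen
# are illustrative placeholders, not OpenAI's real API or safety stack.
from dataclasses import dataclass
from typing import Optional

# Categories the wrapper refuses outright, mirroring the baked-in blocks
# on surveillance and autonomous-weapons use described above.
PROHIBITED_CATEGORIES = {
    "mass_surveillance": ["track all citizens", "bulk intercept", "dragnet"],
    "autonomous_weapons": ["fire without human approval", "autonomous engagement"],
}

@dataclass
class PolicyDecision:
    allowed: bool
    category: Optional[str]
    reason: str

def screen_request(prompt: str) -> PolicyDecision:
    """Crude keyword screen standing in for a real policy classifier."""
    lowered = prompt.lower()
    for category, markers in PROHIBITED_CATEGORIES.items():
        if any(marker in lowered for marker in markers):
            return PolicyDecision(False, category, f"matches prohibited category '{category}'")
    return PolicyDecision(True, None, "no prohibited category detected")

def call_model(prompt: str) -> str:
    """Placeholder for the actual cloud-hosted model call."""
    return f"[model response to: {prompt!r}]"

def guarded_completion(prompt: str) -> str:
    """Gate the model behind the policy screen; refuse instead of forwarding."""
    decision = screen_request(prompt)
    if not decision.allowed:
        return f"REFUSED ({decision.reason})"
    return call_model(prompt)

if __name__ == "__main__":
    print(guarded_completion("Summarize this logistics report."))
    print(guarded_completion("Plan dragnet collection to track all citizens."))
```

In a real deployment you'd swap the keyword screen for a proper policy classifier or the provider's moderation tooling, and route refusals to a human reviewer rather than silently dropping them; the point is the shape: a gate you control wrapped around blocks you don't.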
Anthropic's stance is noble but naive: a $200M contract torched and growth stunted in the US-China AI arms race. OpenAI wins the foothold, but at what cost? Its flip exposes how national security trumps safety when dollars (and drones) beckon. Expect pressure on safety-first firms to bend or break.
The real controversy? Enforceability. Laws don't code themselves; edge cases like 'lawful' surveillance will haunt us. Altman's memo cites statutes, but the Pentagon rejected explicit bans. Developers: prioritize hybrid legal-plus-technical safeguards now; they beat purity tests in a world racing to weaponize AI. Anthropic may rejoin, but trust is shattered. This feud isn't over. It's the new normal.

