# OpenAI's DoD Sellout: 295% Uninstall Spike Proves Ethics Matter
OpenAI just learned the hard way: cozying up to the military kills consumer trust. In a stunning backlash, ChatGPT app uninstalls skyrocketed 295% right after the company inked a deal with the U.S. Department of Defense (DoD)—or "Department of War," as critics aptly dub it—on February 28, 2026. Meanwhile, rival Anthropic's Claude saw downloads surge, even overtaking ChatGPT in Apple's App Store rankings by March 1. This isn't just data; it's a consumer revolt against AI hypocrisy.
Let's unpack the fiasco. OpenAI publicly cheered Anthropic's ethical stand—rejecting a DoD deal over fears of mass surveillance and killer robots—hours before signing its own. CEO Sam Altman admitted on X that the deal was "definitely rushed" and that the optics "don't look good," but defended it as a de-escalation play. The contract boasts "more guardrails than any previous" deal: cloud-only API deployment to block edge tech for autonomous weapons, explicit bans on domestic mass surveillance, and human oversight for any use of force. OpenAI even pushed the DoD to extend those terms to all labs, including the shunned Anthropic, now branded a "supply chain risk" by Trump-era feds.
> "If we are right and this does lead to a de-escalation... we will look like geniuses. If not, we will continue to be characterized as rushed and uncareful." —Sam Altman
Bold prediction: OpenAI won't look like geniuses. Critics like Techdirt's Mike Masnick shred the safeguards, noting that Executive Order 12333 lets the NSA snag U.S. data abroad under "lawful purposes"—a vague loophole the contract doesn't seal. Tech investor Aidan Gold called out the hypocrisy: OpenAI backed Anthropic's red lines, then caved. Social media exploded with #CancelChatGPT, with Redditors sharing data-export guides and branding OpenAI "ethics-free soul-sellers."
For developers, this is a wake-up call. The deal mandates cloud APIs with safety stacks—no edge deployment for weapons, technical experts kept in the loop—setting precedents but also demanding new compliance work, like human-oversight protocols. Sure, it unlocks classified networks, but at what cost? OpenAI risks subscription bleed while positioning itself as the "responsible" DoD partner amid Anthropic's ban.
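To make the compliance point concrete, here is a minimal sketch of what a human-oversight gate could look like in practice: model outputs are held in a queue and cannot be acted on until a named reviewer signs off. Every class and method name here is illustrative, invented for this example—nothing in the contract or OpenAI's API specifies this design.

```python
# Hypothetical human-in-the-loop approval gate. All identifiers are
# illustrative; this is not OpenAI's API or the DoD contract's actual design.
from dataclasses import dataclass
from typing import Optional


@dataclass
class ReviewedAction:
    """A model output awaiting human sign-off before anything acts on it."""
    prompt: str
    model_output: str
    approved: bool = False
    reviewer: Optional[str] = None


class OversightGate:
    """Holds model outputs until a human reviewer explicitly approves them."""

    def __init__(self) -> None:
        self.pending: list[ReviewedAction] = []

    def submit(self, prompt: str, model_output: str) -> ReviewedAction:
        # Model output enters a pending queue instead of being executed.
        action = ReviewedAction(prompt, model_output)
        self.pending.append(action)
        return action

    def approve(self, action: ReviewedAction, reviewer: str) -> None:
        # A named human takes responsibility for releasing the output.
        action.approved = True
        action.reviewer = reviewer
        self.pending.remove(action)

    def execute(self, action: ReviewedAction) -> str:
        # Refuse to act on anything a human has not signed off on.
        if not action.approved:
            raise PermissionError("human approval required before execution")
        return action.model_output


gate = OversightGate()
act = gate.submit("summarize logistics report", "summary text...")
gate.approve(act, reviewer="analyst_jones")
result = gate.execute(act)  # only reachable after explicit human sign-off
```

The design choice worth noting: the gate fails closed. An unapproved action raises rather than silently passing through, which is the property auditors would actually test for.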
My take: OpenAI bet big on government goodwill and lost the people. Users aren't buying "layered safeguards" when history screams military AI slippery slopes. Claude's rise shows ethics-driven AI wins markets—Anthropic's constitutional principles are paying off. If OpenAI wants redemption, prove the guardrails with transparency, not blog posts. Otherwise, this 295% spike is just the start of a user exodus. Developers, take note: build with integrity, or watch your apps get uninstalled.

