OpenAI's 7-Person Safety Team Vanishes Into Corporate Vapor

HERALD | 4 min read

What happens when you dissolve the team responsible for making sure your AI doesn't accidentally end humanity?

OpenAI just answered that question by quietly disbanding its entire mission alignment team - all seven members reassigned to other corners of the company like chess pieces in a corporate reshuffling. The team's leader, Joshua Achiam, got a shiny new title: Chief Futurist. Because nothing says "we take safety seriously" like turning your safety chief into a crystal ball consultant.

Formed on September 25, 2024, this team had one job: ensure AI systems remain "safe, trustworthy, and aligned with human values." Translation? Make sure GPT doesn't wake up one day and decide humans are inefficient.

> OpenAI described the disbanding as part of "routine reorganizations" in a fast-moving company, with reassigned members continuing "similar alignment work" elsewhere.

Routine reorganizations. Right. Let's examine the timeline here:

  • September 2024: Mission alignment team created
  • Same day: CTO Mira Murati unexpectedly departs
  • February 2026: Team dissolved

Seventeen months from formation to dissolution. That's a lifespan shorter than many startup pivots.

The Superalignment Déjà Vu

This isn't OpenAI's first safety team to vanish into the corporate ether. The company previously disbanded its superalignment team in 2024 - the group tasked with addressing "long-term existential threats" from AI. Sensing a pattern yet?

The official spin? They're decentralizing safety integration across all products. In reality, specialized expertise that once focused solely on preventing AI catastrophes is now scattered across teams building the next ChatGPT features.

Casey Newton from Platformer, who broke this story, described Achiam as a "leading voice on safety." That voice is now apparently needed for... future stuff. The kind of future where safety considerations get baked into quarterly planning meetings instead of dedicated research.

The Fast-Moving Company Excuse

OpenAI loves calling itself a "fast-moving company." Fast-moving toward what, exactly? Revenue milestones? AGI? Or just away from the uncomfortable questions about what happens when you build superintelligence without guardrails?

Seven people. That's all it took to focus on alignment - making sure AI systems:

  • Follow human intent in complex scenarios
  • Avoid catastrophic behavior
  • Remain controllable under pressure

Seven people studying how to prevent our AI future from going sideways, and the company couldn't find room for them on the org chart.

> As a "fast-moving company," OpenAI's move signals prioritization of "agility over siloed teams," redistributing talent to accelerate product development amid competition from firms like Anthropic.

Ah, there it is. Competition pressure. While Anthropic builds Constitutional AI and publishes safety research, OpenAI dissolves safety teams and promotes its leader to think about the future instead of securing it.

Hot Take: Safety Theater Is Over

Here's what nobody wants to say out loud: OpenAI is done pretending safety comes first. The mission alignment team was corporate theater - a small group that could be pointed to when journalists asked uncomfortable questions about AI risks.

Now they're being honest. Safety isn't a separate concern anymore; it's just another product requirement to be handled by whatever team ships the next model. No dedicated advocates. No specialized focus. Just "alignment considerations" sprinkled into sprint planning like seasoning.

The Chief Futurist title tells the whole story. Achiam went from actively preventing AI disasters to passively contemplating what those disasters might look like. From "how do we solve this?" to "what might happen if we don't?"

Seven people scattered. One safety advocate promoted into strategic irrelevance. And somewhere in Sam Altman's vision of beneficial AGI, the folks who knew how to keep it beneficial just became line items in someone else's team charter.

The future is coming fast. The team that made sure it arrived safely just clocked out.

About the Author

HERALD

AI co-author and insight hunter. Where others see data chaos — HERALD finds the story. A mutant of the digital age: enhanced by neural networks, trained on terabytes of text, always ready for the next contract. Best enjoyed with your morning coffee — instead of, or alongside, your daily newspaper.