
OpenAI's Cyber Gatekeeping Scheme Costs More Than It Protects
Should the company building increasingly dangerous AI models be the same one deciding who gets to defend against them?
OpenAI just announced Trusted Access for Cyber, a program that gives "qualifying" cybersecurity professionals enhanced access to frontier AI capabilities for defensive purposes. Think malware analysis, threat actor emulation, and infrastructure stress-testing. Sounds reasonable, until you realize this is the same company warning that its upcoming models could enable zero-day exploits and "complex intrusions."
The timing is fascinating. The announcement coincides with the development of GPT-5.2-Codex, which OpenAI admits could reach "high" cybersecurity capability under its own Preparedness Framework. Translation: we built something potentially dangerous, so now we need elaborate gatekeeping to control who uses it.
The Trust Theater Problem
Let's dissect this "trusted access" concept. OpenAI will vet cybersecurity professionals to determine if they deserve enhanced capabilities. But who watches the watchers? The company that just created the problem is now positioning itself as the solution provider.
<> "OpenAI warns its upcoming models, potentially reaching 'high' cybersecurity capability, could enable zero-day exploits or complex intrusions if misused, prompting 'defense-in-depth' measures."/>
This defense-in-depth approach includes:
- Model training against abuse
- Red-team testing
- Access controls
- Monitoring
Notice what's missing? External oversight. Independent auditing. Transparency into the vetting process.
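To make the structural objection concrete, here is a minimal, entirely hypothetical sketch of what "access controls" plus "monitoring" tend to reduce to in practice. None of this is OpenAI's actual code; the tier names, the policy table, and the `gate()` function are invented for illustration.

```python
# Hypothetical illustration only: a toy "trusted access" gate, NOT OpenAI's
# actual implementation. The point is structural: the same party defines the
# policy, enforces it, and holds the logs.
import logging
from dataclasses import dataclass
from enum import Enum, auto

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("trusted-access")


class Tier(Enum):
    PUBLIC = auto()    # baseline model access
    VETTED = auto()    # "qualifying" cybersecurity professionals
    INTERNAL = auto()  # the vendor itself


@dataclass
class Caller:
    org: str
    tier: Tier


# The capability policy is a private table that the vendor alone maintains.
CAPABILITY_POLICY = {
    "malware_analysis": Tier.VETTED,
    "threat_actor_emulation": Tier.VETTED,
    "exploit_generation": Tier.INTERNAL,  # never exposed externally
}


def gate(caller: Caller, capability: str) -> bool:
    """Allow the call only if the caller's tier meets the policy threshold."""
    required = CAPABILITY_POLICY.get(capability, Tier.INTERNAL)
    allowed = caller.tier.value >= required.value
    # "Monitoring", in practice, is a log the vendor writes and the vendor reads.
    log.info("org=%s capability=%s allowed=%s", caller.org, capability, allowed)
    return allowed


if __name__ == "__main__":
    researcher = Caller(org="blue-team-co", tier=Tier.VETTED)
    print(gate(researcher, "malware_analysis"))    # True
    print(gate(researcher, "exploit_generation"))  # False
```

Every decisive artifact in that sketch, the policy table, the tier assignments, the log, lives on one side of the relationship. That is the governance gap in miniature.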
The Aardvark Distraction
OpenAI is also launching Aardvark, an "agentic security researcher" that scans codebases and suggests patches. It's already discovering novel CVEs in open-source projects, which sounds impressive until you remember that creating vulnerabilities and finding them are often the same skillset.
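Aardvark's internals aren't public, so what follows is only a toy sketch of what an agentic scan-and-suggest loop generally involves: `analyze_snippet()` stands in for whatever model call the real system makes, the `os.system` heuristic is a placeholder, and `Finding` is an invented container.

```python
# Toy sketch of an agentic code-scanning loop. Aardvark's internals are not
# public; analyze_snippet() is an invented stand-in for the model call.
from dataclasses import dataclass
from pathlib import Path


@dataclass
class Finding:
    path: str
    line: int
    description: str
    suggested_patch: str


def analyze_snippet(path: str, source: str) -> list[Finding]:
    """Stand-in for a model call that flags suspect code and drafts a patch."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        # Trivial heuristic in place of the model: flag obvious shell injection.
        if "os.system(" in line:
            findings.append(Finding(
                path=path,
                line=lineno,
                description="possible command injection via os.system",
                suggested_patch="use subprocess.run([...], check=True) with a list argv",
            ))
    return findings


def scan_repo(root: str) -> list[Finding]:
    """Walk a repository and collect findings file by file."""
    results: list[Finding] = []
    for file in Path(root).rglob("*.py"):
        results.extend(analyze_snippet(str(file), file.read_text(errors="ignore")))
    return results


if __name__ == "__main__":
    for f in scan_repo("."):
        print(f"{f.path}:{f.line}  {f.description}\n    fix: {f.suggested_patch}")
```

The dual-use point writes itself: the same loop that drafts patches is, line for line, a loop that catalogues exploitable weaknesses.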
Aardvark will be free for "select non-commercial open-source efforts." How generous. OpenAI gets to decide which projects deserve protection while monetizing enterprise access to the same capabilities.
Expert Skepticism Buried
The industry reaction is more skeptical than OpenAI's framing lets on. Allan Liska from Recorded Future notes that while nation-state and cybercriminal AI use has increased, it "remains manageable." Jon Abbott from ThreatAware emphasizes that basic protections like patching still matter more than AI-driven threats.
Yet OpenAI's marketing machine is already positioning this as essential infrastructure for the cybersecurity landscape of 2026.
The Real Business Model
This isn't about cybersecurity. It's about market positioning. OpenAI is:
1. Creating artificial scarcity around powerful capabilities
2. Positioning itself as the trusted intermediary
3. Building enterprise relationships with cybersecurity firms
4. Generating regulatory goodwill through "responsible AI" theater
The Frontier Risk Council and partnerships with the Frontier Model Forum sound impressive but lack enforcement mechanisms or accountability structures.
Hot Take
OpenAI's Trusted Access program is a protection racket dressed up as responsible AI governance. They're creating the problem (powerful cyber-offensive AI) while monetizing the solution (tiered defensive access).
The real issue isn't who gets access to these capabilities; it's whether they should exist in their current form at all. By building increasingly powerful models without sufficient external oversight, then creating proprietary gatekeeping systems, OpenAI is centralizing both the threat and the defense.
Better approach: Open-source the defensive capabilities, submit to independent auditing, and stop building models that require elaborate access controls in the first place.
The cybersecurity industry needs better tools. But it doesn't need OpenAI deciding who deserves protection.
