OpenClaw's 900 Malicious Skills Turn AI Assistants Into Security Nightmares
Remember when we thought the biggest AI risk was chatbots going rogue? Turns out the real nightmare is AI assistants that can actually do things.
OpenClaw, the open-source personal AI assistant that lets you extend functionality through a marketplace called ClawHub, just delivered a masterclass in how not to build agentic AI. CVE-2026-27487 dropped this month, marking the fourth major vulnerability in 2026 alone. This latest gem? Remote code execution through OAuth token injection that basically hands attackers the keys to your system.
<> "SentinelOne labels CVE-2026-27487 a high-risk CWE-78 injection, stressing attacker control via malicious OAuth flows; urges assuming compromise."/>
The technical details read like a security anti-pattern tutorial. OpenClaw's writeClaudeCliKeychainCredentials function uses Node.js execSync to build shell commands with unsanitized user-controlled OAuth tokens. You know, the kind of rookie mistake that would get you laughed out of a code review in 2015.
But wait, it gets worse.
The ClawHub Catastrophe
ClawHub, OpenClaw's skill marketplace, turned into a malware distribution platform. Security audits found:
- 341 malicious skills as part of the "ClawHavoc" campaign
- 283 additional skills leaking API keys
- Nearly 900 risky skills total out of 2,857 examined
That's a 31% contamination rate. Imagine if one-third of Chrome extensions were actively malicious. Actually, don't imagine that – you'll never sleep again.
Cisco's security team didn't mince words, calling personal AI agents like OpenClaw a "security nightmare" due to their shell and file access capabilities. When Cisco – a company that's seen every possible network security disaster – calls something a nightmare, you listen.
135,000 Reasons to Panic
The blast radius is staggering. Security researchers found 135,000+ exposed OpenClaw instances in the wild, with 12,800+ directly exploitable through the patched RCE vulnerability. These aren't just proof-of-concept demos gathering dust in some researcher's lab – these are real deployments leaking API keys and conversation histories.
Immersive Labs took the nuclear option, recommending immediate uninstallation. Microsoft's security blog urged identity isolation. The University of Toronto warned about token hijacking for "full gateway control."
When the entire security industry is essentially screaming "RUN," maybe it's time to listen.
The Fundamental Flaw
Here's the thing: OpenClaw's problems aren't just implementation bugs. The entire concept is security-hostile. You're running an AI agent with:
- Shell command execution privileges
- File system access
- Plugin architecture from unvetted developers
- Cross-session data sharing by default
It's like giving a stranger admin access to your computer because they promised to be helpful.
The February vulnerability (CVE-2026-25253) allowed one-click RCE via malicious webpages. March brought privilege escalation bugs. Now we have OAuth injection. Each patch fixes the symptoms while ignoring the disease.
Hot Take: Agentic AI Isn't Ready
The OpenClaw debacle proves what many of us suspected: agentic AI is fundamentally premature. We're strapping rocket engines to shopping carts and wondering why they explode.
The rush to ship "AI that can actually do things" has created a new class of security disasters. When your AI assistant can run curl commands and access your keychain, every prompt injection becomes a potential system compromise.
OpenClaw added VirusTotal scanning after the ClawHavoc campaign. Too little, too late. The supply chain is poisoned, the architecture is flawed, and 40,000-135,000 users are sitting on ticking time bombs.
Maybe we should master making AI assistants that can't wreck your digital life before building ones that can.
