
Claude Code's 15-Minute Problem: Why Boris Tane Never Lets AI Plan Without Permission
Here's the uncomfortable truth: your AI coding assistant is probably wasting 15 minutes of your time right now, building something technically perfect but completely wrong for your project.
Boris Tane figured this out the hard way. His solution—a rigid workflow that never lets Claude Code write a single line until a human reviews the plan—just earned 208 points and 124 comments on Hacker News. Not because it's revolutionary, but because it acknowledges what we all know but hate admitting: AI makes "reasonable-but-wrong assumptions" that cost us hours to unwind.
The Real Story
Tane's approach isn't about fancy prompts or clever tricks. It's about control. His flowchart reads like a paranoid project manager's dream: Research → Plan → Annotate (1-6 iterations) → Todo List → Implement → Feedback. Zero shortcuts allowed.
The magic happens in those annotation cycles. When Claude proposes something technically sound but contextually idiotic, Tane can reject it before the AI burns through tokens building the wrong thing. Smart. Cynical. Effective.
"Never let Claude write code until a written plan is reviewed and approved"
This isn't just methodology—it's acknowledgment that Claude Code lacks full project context. It doesn't know your product direction, company culture, or that weird legacy system everyone pretends doesn't exist.
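None of this is Tane's actual tooling, but the gate itself is simple enough to sketch. Here's a minimal Python state machine, with hypothetical names (`GatedWorkflow`, `approve_plan` are illustrative, not from the post), that makes the rule mechanical: the Implement phase is unreachable until a human flips the approval flag.

```python
from enum import Enum, auto


class Phase(Enum):
    # The stages from Tane's flowchart
    RESEARCH = auto()
    PLAN = auto()
    ANNOTATE = auto()
    TODO = auto()
    IMPLEMENT = auto()
    FEEDBACK = auto()


class GatedWorkflow:
    """Hypothetical sketch of the gate: no code until the plan is approved."""

    def __init__(self):
        self.phase = Phase.RESEARCH
        self.plan_approved = False

    def approve_plan(self):
        # A human reviewer flips this flag after the annotation cycles.
        self.plan_approved = True

    def advance(self, next_phase: Phase) -> Phase:
        # The one hard rule: Implement is blocked without human approval.
        if next_phase is Phase.IMPLEMENT and not self.plan_approved:
            raise PermissionError("plan not approved: no code yet")
        self.phase = next_phase
        return self.phase
```

The point isn't the code; it's that the constraint lives outside the model, so no amount of confident output can skip the review.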
What Everyone Else Misses
The HN crowd is treating this like gospel, with users like "ramoz" sharing their own Gemini Pro implementations using status.md files. But here's what they're not saying: this workflow exists because AI coding tools are still fundamentally unreliable for architectural decisions.
Claude Code offers extended thinking modes with budgets from 4,000 to 32,000 tokens, plus something called "Ultrathink" for design work. Impressive specs. But Tane's entire system is built on the assumption that more thinking power doesn't equal better judgment.
The implementation is brutally simple:
- "Implement it all... mark it as completed in the plan document... continuously run typecheck"
- Parallel sessions via desktop app or web VMs for larger projects
- Hierarchical files and symlinks to organize complex codebases
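The "mark it as completed in the plan document" step is easy to mock up. A small Python sketch, assuming the plan uses markdown checkboxes (the post doesn't specify a format, so `mark_done` and the checkbox convention are illustrative):

```python
import re


def mark_done(plan_text: str, task: str) -> str:
    """Flip a '- [ ] task' checkbox to '- [x] task' in a plan document.

    Hypothetical helper: the workflow only says items get marked
    completed; the checkbox syntax here is an assumption.
    """
    pattern = re.compile(
        r"^(\s*-\s*)\[ \](\s*" + re.escape(task) + r"\s*)$",
        re.MULTILINE,
    )
    return pattern.sub(r"\1[x]\2", plan_text)


plan = """\
## Todo
- [ ] add login endpoint
- [ ] write integration tests
"""

# Only the named task gets checked off; the rest of the plan is untouched.
updated = mark_done(plan, "add login endpoint")
```

Trivial on its own, but it's the record-keeping that makes parallel sessions auditable: the plan document, not the chat transcript, is the source of truth for what's done.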
The Token Economics
Here's where it gets interesting. Tane claims this minimizes token usage by preventing do-overs. That's the kind of efficiency talk that makes CTOs pay attention in a world where AI costs are climbing fast.
YouTube videos are already emerging about "Stop Babysitting Your AI Agent," covering PRDs, task dependencies, and sub-agents to avoid "context rot." The market is clearly hungry for structure in AI-assisted development.
Why This Actually Matters
Upsun is positioning similar methodologies as transforming AI from "eager juniors" to "rockstar developers." Bold claim. Questionable metaphor. But the underlying point stands: structured workflows make AI tools enterprise-ready.
The 124 HN comments aren't just praise—they're evidence that developers are tired of babysitting AI. They want guardrails, not more features.
The Verdict
Tane's workflow isn't groundbreaking—it's necessary. In an industry obsessed with AI capabilities, he's focused on AI reliability. That's refreshingly honest.
Will this become standard practice? Probably. Should it? Absolutely.
The fact that we need elaborate human oversight systems to make AI coding tools useful tells you everything about where we really are in this hype cycle. Not at the promised land of autonomous development. Still in the messy middle, building better cages for our digital assistants.
At least Tane's cage has a plan.
