
The Real Problem With AI-Assisted Coding
A developer's plea on Hacker News struck a nerve: they're migrating a jQuery/Django project to SvelteKit, expecting AI to cut each route's work from hours to 15-20 minutes. Reality hit hard—every AI-generated component still needed 1-2 hours of manual fixes. Sound familiar?
The thread exploded with practical advice that challenges the "AI will code for you" hype. Here's what actually works.
Why Your AI Prompts Fail
Most developers treat AI coding assistants like magic wands. "Convert this to SvelteKit" they type, expecting production-ready code. What they get is syntactically correct garbage that breaks their styles, ignores their patterns, and misses edge cases.
> "AI is great at pattern matching, but it needs patterns to match against."
The insight is brutal but accurate: AI amplifies your specifications, not your intentions. Vague prompts produce vague code. The fix isn't better AI—it's better inputs.
The CLAUDE.md Strategy
The highest-rated advice involves maintaining a CLAUDE.md file in your project root. This isn't documentation—it's a context injection system that gives AI the patterns it needs.
```markdown
# CLAUDE.md

## Code Style
- Use TypeScript strict mode
- Prefer `const` over `let`
- Component files use PascalCase
- Utility functions use camelCase
```

With this file in place, every AI session starts with the right context. No more explaining your conventions repeatedly.
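To see the conventions in action, here's a sketch of a utility written to that style (the file name, function, and constant are invented for illustration):

```typescript
// dateUtils.ts -- camelCase utility name, `const` over `let`,
// strict-mode-safe types, per the CLAUDE.md conventions above.
const MS_PER_DAY = 86_400_000;

const daysBetween = (a: Date, b: Date): number =>
  Math.round((b.getTime() - a.getTime()) / MS_PER_DAY);

console.log(daysBetween(new Date("2024-01-01"), new Date("2024-01-08"))); // 7
```

A small sample like this in the repo gives the AI something stronger than rules: a worked instance of the rules.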
Plan Mode: The Secret Weapon
One commenter's workflow change cut their iteration time by 70%: aggressive use of Plan Mode before writing any code.
Instead of asking AI to "convert this component," they break it down:
```
1. Analyze the jQuery component - what state does it manage?
2. Map event handlers to Svelte equivalents
3. Identify API dependencies
4. Outline the load function data requirements
5. Propose the component structure
```

The AI produces a plan. The developer reviews, adjusts, asks clarifying questions. Only when the plan looks solid does execution begin.
This front-loads the thinking. Code generation becomes a formality.
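A sketch of what step 1's output might look like, assuming a hypothetical sortable user-table widget (all names are invented): capturing the jQuery component's implicit state as a type gives the AI an unambiguous target before any Svelte is written.

```typescript
// Plan artifact: the state the old jQuery widget managed via DOM classes
// and data-* attributes, made explicit before porting to Svelte.
interface UserTableState {
  rows: Array<{ id: number; name: string }>;
  sortKey: "id" | "name";
  sortAsc: boolean;
  loading: boolean;
}

// Initial state the Svelte component will start from:
const initialState: UserTableState = {
  rows: [],
  sortKey: "id",
  sortAsc: true,
  loading: false,
};
```

Reviewing a ten-line type is far cheaper than reviewing a generated component that guessed the state model wrong.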
Concrete Examples Beat Descriptions
Telling AI "use our coding style" is useless. Showing it works:
```
// BAD PROMPT:
// "Convert this to a Svelte component with good practices"

// GOOD PROMPT with example:
// "Convert this to a Svelte component. Here's our pattern:

<script lang="ts">
  import { onMount } from 'svelte';
```

The AI now has a concrete target. It will mirror your import structure, your TypeScript patterns, your conditional rendering approach.
The Verification Loop
AI-generated code looks correct. It often isn't. The HN thread emphasized building verification into your workflow:
```sh
# After each AI generation:
npm run lint   # Catch style violations
npm run check  # TypeScript errors
npm run test   # Unit tests
npm run dev    # Visual verification
```

One developer automated this with a pre-commit hook that runs all checks. If AI code doesn't pass, it doesn't get committed.
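A minimal sketch of such a hook's core, assuming a Node/TypeScript setup (the helper name is invented; the commands mirror the checks above):

```typescript
import { execSync } from "node:child_process";

// Runs each check in order; returns the first failing command,
// or null if everything passed.
function runChecks(commands: string[]): string | null {
  for (const cmd of commands) {
    try {
      execSync(cmd, { stdio: "pipe" });
    } catch {
      return cmd;
    }
  }
  return null;
}

// In the pre-commit hook itself:
//   const failed = runChecks(["npm run lint", "npm run check", "npm run test"]);
//   if (failed !== null) process.exit(1); // non-zero exit blocks the commit
```

Git aborts the commit whenever the hook exits non-zero, so unverified AI output physically cannot land in history.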
Better yet: write tests first. When AI generates code that must pass existing tests, the feedback loop tightens dramatically. TDD isn't just for humans—it constrains AI behavior.
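For instance, with a hypothetical utility (the function and cases are invented): write the cases down first, then require any AI-generated implementation to pass them before it's accepted.

```typescript
// Tests written first -- these pin down behavior before the AI writes code.
const cases: Array<[number, string]> = [
  [1999, "$19.99"],
  [5, "$0.05"],
  [0, "$0.00"],
];

// An implementation (AI-generated or not) is accepted only if every case passes.
function formatPrice(cents: number): string {
  return `$${(cents / 100).toFixed(2)}`;
}

for (const [input, expected] of cases) {
  console.assert(formatPrice(input) === expected, `formatPrice(${input})`);
}
```

The cases double as the prompt: paste them in and the AI's target is exact, not inferred.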
Session Hygiene Matters
Context windows have limits. As your session grows, older context fades. Developers reported better results with fresh sessions for each distinct task rather than marathon conversations.
The mental model: treat each session like a new developer joining the project. Give them the CLAUDE.md file, the specific task, and the relevant code snippets. Nothing more.
The Opus vs Sonnet Decision
Model selection came up repeatedly. For refactoring work, several commenters strongly preferred Opus over Sonnet:
- Opus: Better at understanding project-wide patterns, more thorough planning
- Sonnet: Faster for isolated, well-defined tasks
The recommendation: use Opus for initial planning and complex migrations, switch to Sonnet for iterations and fixes.
Realistic Expectations
The thread's most sobering comment: "If you're spending 1-2 hours fixing AI code, you're not doing it wrong. That's the actual speed improvement."
Pre-AI, that route might have taken 4-6 hours. AI brought it to 2-3. That's still a win—just not the magical "10 minutes per route" fantasy.
AI doesn't eliminate engineering judgment. It accelerates work within defined boundaries.
Your Action Plan
- Create a `CLAUDE.md` file with your project's conventions
- Use Plan Mode aggressively before generating code
- Provide concrete code examples, not descriptions
- Build verification into your workflow (lint, type-check, test)
- Start fresh sessions for distinct tasks
- Write tests first when possible
- Set realistic expectations: 50% time savings is excellent
The developers who succeed with AI aren't finding magic prompts. They're applying the same engineering discipline they use everywhere else: clear specifications, small iterations, and continuous verification.
The tool is powerful. The skill is knowing how to wield it.

