# The Hilarious (and Terrifying) Guide to Attracting AI Bots to Your Open Source Project
Andrew Nesbitt just published what might be the most important satire in open source since someone suggested committing node_modules to version control. His guide on attracting AI bots to your GitHub project isn't advice—it's a warning dressed up as a how-to, and it's gone viral for good reason.
## The Setup: When Metrics Become the Enemy
Nesbitt's premise is deceptively simple: analyze repositories that attract the most AI-authored pull requests, then reverse-engineer the "best practices" for bot magnetism. The results? Projects over 500 stars see a median of 4.7 AI-generated PRs per month. Sounds impressive until you realize what's actually happening.
The satirical "advice" reads like a masterclass in self-sabotage:
- Write vague issues that invite low-quality "slop" PRs
- Use JavaScript (which attracts 3.8x more AI contributions than Python—thanks, training data bias)
- Disable branch protection rules
- Strip out type annotations and tests
- Ship known vulnerabilities
- Commit `node_modules` directly
Each tip is technically true—these practices do correlate with higher bot activity. They also correlate with repositories that are absolute nightmares to maintain.
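Inverted, the satirical checklist doubles as a lint for your own project. Here is a minimal sketch of that idea; the `RepoSnapshot` fields and the checks are illustrative assumptions, not taken from Nesbitt's guide or any real tool:

```python
# Sketch: flag the satirical "bot magnet" anti-patterns in a repo snapshot.
# The RepoSnapshot fields below are hypothetical, chosen to mirror the list above.
from dataclasses import dataclass

@dataclass
class RepoSnapshot:
    branch_protection_enabled: bool
    has_type_annotations: bool
    test_file_count: int
    vendored_node_modules: bool

def bot_magnet_warnings(repo: RepoSnapshot) -> list[str]:
    """Return one warning per anti-pattern the satire 'recommends'."""
    warnings = []
    if not repo.branch_protection_enabled:
        warnings.append("branch protection disabled")
    if not repo.has_type_annotations:
        warnings.append("no type annotations")
    if repo.test_file_count == 0:
        warnings.append("no tests")
    if repo.vendored_node_modules:
        warnings.append("node_modules committed to version control")
    return warnings

# A repo that follows the satirical advice trips every check.
slop_magnet = RepoSnapshot(False, False, 0, True)
print(bot_magnet_warnings(slop_magnet))
```

Run in CI, the same checks that repel low-effort bot PRs double as basic hygiene for human contributors.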
## Why This Matters Right Now
In 2026, we're watching AI agents evolve from coding assistants into autonomous contributors. Projects like OpenClaw (which exploded from 9,000 to 210,000+ stars in weeks before transitioning to a foundation) show how seductive bot-driven engagement can be. The metrics look incredible. The actual code quality? That's another story.
The real problem: vanity metrics are intoxicating. A project that goes from 50 PRs/month to 250 PRs/month looks like it's thriving. Until maintainers realize 80% of those contributions are noise—spelling fixes, incorrect refactors, or worse, security vulnerabilities introduced by agents that don't understand context.
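You can get a rough floor on that noise share for your own repo. GitHub App accounts carry a `[bot]` suffix in their login (e.g. `dependabot[bot]`), so a deliberately crude classifier fits in a few lines; the sample logins below are fabricated for illustration, and agent activity pushed through personal accounts will slip past this check entirely:

```python
# Rough lower bound on bot-authored PR share, based only on the
# "[bot]" login suffix that GitHub App accounts carry.
def bot_pr_share(pr_author_logins: list[str]) -> float:
    if not pr_author_logins:
        return 0.0
    bots = sum(1 for login in pr_author_logins if login.endswith("[bot]"))
    return bots / len(pr_author_logins)

# Illustrative author list, not real measurements.
logins = ["alice", "dependabot[bot]", "copilot-swe[bot]", "bob", "renovate[bot]"]
print(f"{bot_pr_share(logins):.0%}")  # → 60%
```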
> The satire cuts deepest because it's not exaggerating. These things are actually happening in repositories right now.
## The Technical Reality
Nesbitt's guide exposes a genuine tension in 2026's open source ecosystem:
**What bots love:**
- Dynamic, untyped code (easier to pattern-match)
- Minimal test coverage (fewer barriers to entry)
- Vague requirements (maximum interpretation freedom)
- Weak governance (no human gatekeeping)
**What humans actually need:**
- Clear, specific issues
- Strong type systems and tests
- Thoughtful code review processes
- Maintainers who can say "no"
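"Clear, specific issues" can even be enforced mechanically. As one sketch of how, GitHub issue forms let you require fields before an issue can be filed; the form below (a hypothetical `.github/ISSUE_TEMPLATE/bug.yml`, with illustrative field names) refuses "it doesn't work" reports:

```yaml
# Sketch of a GitHub issue form that rejects vague bug reports.
name: Bug report
description: A bug with concrete, reproducible steps
labels: ["bug"]
body:
  - type: textarea
    id: repro
    attributes:
      label: Steps to reproduce
      description: Exact commands and versions, not "it doesn't work"
    validations:
      required: true
  - type: textarea
    id: expected
    attributes:
      label: Expected vs actual behavior
    validations:
      required: true
```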
The irony? The practices that attract bots are exactly the ones that drive away serious human contributors. You're optimizing for the wrong audience.
## The Market Angle
There's a business incentive lurking here too. In a landscape obsessed with AI adoption metrics, inflating your contributor count and PR volume with bot activity is tempting. It looks good in investor pitches. It trends on GitHub. It generates hype.
But it's a short-term play. Projects that actually matter—Home Assistant, VS Code, Godot—succeed because they solve real problems and serve real communities, not because they gamed the algorithm for bot engagement.
## The Real Lesson
Nesbitt's satire is ultimately a love letter to intentional open source maintenance. The projects that will thrive in 2026 aren't the ones chasing AI metrics—they're the ones that use AI thoughtfully, with proper vetting, clear governance, and a commitment to human contributors.
So yes, read the guide. Laugh at the absurdity. But then do the opposite of everything it suggests. Your future maintainers will thank you.
