Jellyfin's 170-Point Ban: No AI Code, No AI Comments, No Exceptions
Everyone's saying AI will revolutionize open source development. Jellyfin just said "absolutely not" and the community is loving it.
The free media server project dropped an official LLM/"AI" Development Policy that bans AI-generated content everywhere. Code? Banned. Pull requests? Banned. Commit messages? Banned. Even forum posts and direct communication. Zero tolerance.
<> "Do not send me something an LLM wrote, if I wanted to read LLM outputs, I would ask an LLM" - top Hacker News comment with community approval/>
This isn't some knee-jerk reaction. Jellyfin forked from Emby in 2018 when Emby went proprietary, and they've built something impressive: 3.4k stars on their Swiftfin iOS client alone, with 387 contributors across the ecosystem. Their recent 10.11.0 release took 12 months of careful development and testing, including a massive EF Core database refactor.
That's the kind of meticulous work that AI "vibe coding" destroys.
The Elephant in the Room
This policy is getting called "virtue signaling" by critics who point out it mostly restates basic contribution rules: write concise PRs, test your code, explain your changes. But that misses the point entirely.
The real issue? LLMs ignore project context completely. They don't understand regressions. They can't grasp the architectural decisions behind Jellyfin's shift away from Emby's model, or why the VLCKit revert was necessary to deal with Opus codec issues.
When you're dealing with complex media streaming software that people rely on for their personal libraries, half-understood AI contributions aren't just annoying - they're dangerous.
Why This Actually Matters
The Hacker News thread (170 points, 90 comments) reveals something fascinating: even heavy LLM users support this ban. One commenter perfectly captured it:
<> "I would much rather read your non-native English, knowing you put thought and care into what you wrote, rather than reading an AI's (poor) interpretation."/>
This hits different because Jellyfin operates with skeleton crews. Their web client has essentially one maintainer handling all PRs. That person doesn't have time to sort through AI slop looking for genuine contributions.
The policy is already rolling out across projects:
- Swiftfin repository linked it in PR #1905
- Official contributor documentation updated
- No exceptions being made
Some developers are calling for instant permabans on AI-detected contributions. That's how frustrated people are getting.
The Bigger Picture
Jellyfin's timing is perfect. We're hitting peak AI hype in 2026 - GPT-5's 200k-token context windows, Gemini 3, everyone rushing to integrate LLMs everywhere. But for a donation-funded, fully open-source project focused on privacy and user control, AI dependencies make zero sense.
This isn't just about code quality (though that matters). It's about project identity. Jellyfin exists because people wanted an alternative to proprietary media servers harvesting their data. Adding AI to the development process would be philosophically incoherent.
And people are noticing. The policy gained traction through Ben's Bites newsletter and organic HN discussion, bringing visibility to a project that competes with well-funded alternatives like Emby and Plex.
What Comes Next
Other maintainers are watching. The HN thread includes suggestions for "Expectation of Code" documents alongside traditional Codes of Conduct. The idea is spreading that projects need explicit standards for thoughtful, context-aware contributions.
Jellyfin just proved you can draw hard lines and the community will respect you for it. In an era of AI-everything, sometimes the most radical position is demanding human thought.
I love the scare quotes around "AI" in their policy title. That tells you everything about where they stand.
