
Linux Kernel's 'Assisted-by' Tag: Sasha Levin's Secret AI Patch Sparks Contributor Guidelines
Last June at Open Source Summit, NVIDIA's Sasha Levin dropped a bombshell. He revealed that a patch he'd submitted to the Linux kernel—already merged by a maintainer—had been generated using an LLM. No disclosure. No warning. Just pure AI code sitting in the world's most important codebase.
The kernel community went ballistic.
Now we have Linux's first official AI guidelines, and they're surprisingly practical. Published right in the kernel source tree at Documentation/process/coding-assistants.rst, these rules feel like they were written by people who actually understand both AI tools and kernel development.
The New Sheriff: Assisted-by Tags
Every AI-generated contribution now needs an Assisted-by tag. The format looks like this:

    Assisted-by: ChatGPT-4 (gpt-4-0314)

The same tag covers whatever did the assisting, whether that's GitHub Copilot or Coccinelle. Clever move. It's transparent without being punitive.
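To see the tag in context, here's a minimal Python sketch that assembles a kernel-style commit message with the trailer block at the bottom. The subject, patch description, and author are made up for illustration:

```python
# Sketch: assembling a kernel-style commit message whose trailer block
# carries both an Assisted-by tag and the human Signed-off-by.
# Subject, body, tool, and author below are illustrative, not a real patch.
def build_commit_message(subject, body, trailers):
    trailer_block = "\n".join(f"{key}: {value}" for key, value in trailers)
    return f"{subject}\n\n{body}\n\n{trailer_block}\n"

msg = build_commit_message(
    "mm/slub: fix off-by-one in partial list accounting",
    "The loop counted one object too many when the slab was full.",
    [
        ("Assisted-by", "ChatGPT-4 (gpt-4-0314)"),
        ("Signed-off-by", "Jane Developer <jane@example.com>"),
    ],
)
print(msg)
```

Trailers live in a contiguous block at the end of the message, which is what lets tools like `git interpret-trailers` find them reliably.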
But here's the kicker: AI agents can't sign off on their own work. Only humans can add the Signed-off-by tag that certifies the Developer Certificate of Origin. You generated it with AI? Fine. But you own it completely.
The guidelines put it bluntly: "AI agents cannot add Signed-off-by tags, as only humans can certify the Developer Certificate of Origin (DCO)."
This cuts straight through the accountability problem that's plaguing AI code everywhere else.
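And it's a rule you can check mechanically. Here's a hypothetical CI-style check, not the kernel's actual tooling, that enforces the core constraint: any commit carrying an Assisted-by trailer must also carry a human Signed-off-by.

```python
# Hypothetical CI-style check (NOT the kernel's real tooling): a commit
# with an Assisted-by trailer must also have a Signed-off-by, since only
# a human can certify the DCO.
import re

TRAILER_RE = re.compile(r"^(Assisted-by|Signed-off-by):\s*(.+)$", re.MULTILINE)

def dco_ok(commit_message):
    trailers = TRAILER_RE.findall(commit_message)
    has_signoff = any(key == "Signed-off-by" for key, _ in trailers)
    has_assist = any(key == "Assisted-by" for key, _ in trailers)
    # AI assistance without a human sign-off fails the check.
    return has_signoff or not has_assist
```

A real gate would go further (e.g. rejecting a Signed-off-by that names the AI tool itself), but the skeleton is this simple.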
Intel vs NVIDIA: The Guidelines War
The backstory gets juicy. After Levin's surprise revelation:
- NVIDIA's Levin posted initial guidelines to the kernel mailing list
- Intel's Dave Hansen countered with a v3 iteration emphasizing transparency
- Lorenzo Stoakes jumped in demanding an "official kernel AI policy document" before any configs
- Michal Babka warned that configs alone might signal developers don't need to worry about AI implications
Classic kernel politics. But it worked.
What Linus Really Thinks
Torvalds has been surprisingly positive about AI assistance, specifically for reviews. He's excited about LLMs helping with the maintainer burnout problem—but not replacing human judgment.
Chris Mason (the Btrfs creator) is already working on AI tools to review recent git commits using kernel coding rules. The goal? Find bugs faster and reduce reviewer load.
Smart approach: treat AI like a really good intern who handles the drudgery.
The Technical Reality Check
These guidelines solve real problems:
- GPL-2.0-only compliance: All AI code must use SPDX identifiers
- Human oversight: You review everything, period
- Full responsibility: Your signature means you own the consequences
- Tool integration: Works with existing tools like Coccinelle and smatch
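The SPDX requirement in particular is trivially machine-checkable; the kernel already ships its own checker (scripts/spdxcheck.py). A deliberately simplified Python sketch of the idea, restricted here to GPL-2.0-only even though the real tool validates against the full SPDX license list:

```python
# Simplified sketch of an SPDX check in the spirit of the kernel's
# scripts/spdxcheck.py. The real tool handles the full license list and
# per-file-type comment styles; this only checks the common case.
def has_valid_spdx(source, allowed=("GPL-2.0-only",)):
    # The kernel wants the identifier on line 1 (or line 2 after a shebang).
    for line in source.splitlines()[:2]:
        if "SPDX-License-Identifier:" in line:
            expr = line.split("SPDX-License-Identifier:", 1)[1]
            expr = expr.strip().rstrip("*/").strip()  # drop C comment closer
            return expr in allowed
    return False
```

Usage: `has_valid_spdx("// SPDX-License-Identifier: GPL-2.0-only\n...")` returns True; a file with no identifier fails.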
By late 2025, the Maintainers Summit had two AI-related proposals land almost as soon as the call for topics opened. The community is clearly hungry for this.
The Broader Implications
This isn't just about kernel development. Companies like NVIDIA and Intel are pushing AI-optimized hardware into Linux constantly. Having clear contribution guidelines removes friction from that process.
More importantly, this sets precedent for every other Linux Foundation project. Ubuntu forums are already calling for "official policy ASAP" to prevent chaos.
The kernel's approach feels refreshingly pragmatic compared to the usual AI hype cycle. No grandiose claims about replacing developers. No fear-mongering about code quality collapse.
Just: Use AI if it helps. Disclose it. Own the results.
My Bet: These guidelines become the gold standard for open source AI policies within 18 months. The Assisted-by tag format gets adopted by major projects across the Linux ecosystem, and we look back on Levin's "secret patch" incident as the moment AI contribution policies grew up.

