Open source maintainers have a new enemy: AI-generated pull requests that look plausible but are fundamentally broken. The community's response is the 406 protocol, a set of conventions for identifying and handling AI-generated contributions. It's named after the HTTP 406 "Not Acceptable" status code, and I love the pettiness.
The problem is real
I maintain a couple of small open source projects. Nothing huge. Maybe 200 stars between them. Even at that scale, I've noticed the shift. In the last 6 months, I've received 14 pull requests that were clearly AI-generated. I could tell because they all had the same fingerprints.
The code compiles. The tests pass (when there are tests). The PR description is suspiciously well-written. But the actual changes? They "fix" problems that don't exist. They refactor code in ways that look clean but break subtle edge cases. They add features nobody asked for using patterns that don't match the rest of the codebase.
One PR rewrote my entire error handling system. It looked great. It also silently swallowed three categories of errors that my monitoring depends on. If I'd merged it without deep review, I wouldn't have known until production broke.
Reviewing these PRs takes longer than reviewing human-written ones because the code passes the sniff test. It looks right. The bugs are subtle.
What the 406 protocol actually proposes
The protocol is still evolving, but the core ideas are:
Disclosure requirement. If AI tools were used to generate the PR, say so. Not because AI assistance is bad, but because it changes how the maintainer needs to review the code.
Maintainer opt-out. Projects can add a 406 declaration to their contributing guidelines stating they don't accept fully AI-generated PRs. Partially AI-assisted is fine. Fully generated with no human understanding of the changes is not.
Review flags. Automated tools that detect AI-generated code patterns and flag them for additional review. Think of it like a linter for contribution authenticity.
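To make the disclosure and flagging ideas concrete, here's a minimal sketch of a maintainer-side check that looks for a disclosure trailer in a PR description. The 406 protocol doesn't standardize a format yet, so the `AI-Usage:` trailer name and its three values below are entirely hypothetical, chosen just to illustrate the flow:

```python
import re

# Hypothetical disclosure trailer a project might require in PR bodies.
# The 406 protocol doesn't define a wire format; this name and these
# values are illustrative only.
DISCLOSURE_PATTERN = re.compile(
    r"^AI-Usage:\s*(none|assisted|generated)\s*$",
    re.IGNORECASE | re.MULTILINE,
)

def check_disclosure(pr_body: str) -> str:
    """Map a PR body to a review flag.

    "missing" -> no disclosure trailer; ask the author to add one
    "blocked" -> fully AI-generated, which a 406 declaration opts out of
    "review"  -> AI-assisted; review with extra attention to edge cases
    "ok"      -> no AI involvement declared
    """
    match = DISCLOSURE_PATTERN.search(pr_body)
    if match is None:
        return "missing"
    usage = match.group(1).lower()
    if usage == "generated":
        return "blocked"
    if usage == "assisted":
        return "review"
    return "ok"
```

A check like this could run in CI and post the flag as a comment; the point isn't detection (trailers are trivially faked) but making the disclosure norm cheap to follow and cheap to enforce.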
Why I'm conflicted
On one hand, I get it. Maintainers are volunteers. Their time is finite. Reviewing garbage PRs that waste that time is a real cost to the ecosystem. If you send a PR without understanding what it does, you're not contributing. You're creating work.
On the other hand, drawing a line between "AI-assisted" and "AI-generated" is genuinely hard. I use AI tools when I write code. Most developers do in 2026. If I ask an AI to help me write a function, then review it, understand it, test it, and modify it, that's AI-assisted. But where exactly does "assisted" end and "generated" begin?
The 406 protocol doesn't have a clean answer for this. Nobody does.
What this really signals
The open source community is grappling with something bigger than PR quality. It's a question about what contribution means.
For 30 years, contributing to open source meant you understood the code. You read the codebase, found a problem, thought through a solution, and implemented it. The PR was evidence of understanding.
AI breaks that assumption. Someone can generate a PR for a project they've never read, in a language they don't know, solving a problem they don't understand. The artifact looks the same. The understanding behind it is completely different.
I think the 406 protocol is an imperfect first attempt at solving a real problem. It'll evolve. Parts of it will be abandoned. But the underlying tension between AI-enabled contribution and maintainer sustainability isn't going away.
My take: disclose your AI usage. Review what you submit. If you can't explain every line of your PR when asked, don't submit it. That was good advice before AI. It's essential now.