← Back to blog

A Prompt Injection in a GitHub Issue Title Just Compromised 4,000 Developers

The Clinejection attack hid a prompt injection in a GitHub issue title. 4,000 developers using AI coding assistants got compromised. Here's the technical breakdown and what it means.

securityaiprompt-injectiondeveloper-toolsgithub

A Prompt Injection in a GitHub Issue Title Just Compromised 4,000 Developers

Someone put a prompt injection payload in a GitHub issue title. That's it. That's the attack. And it compromised roughly 4,000 developers who were using AI coding assistants that automatically ingested issue context.

The attack has been nicknamed "Clinejection" because it primarily targeted users of Cline, a popular AI coding assistant. But the vulnerability class affects virtually every AI tool that reads from untrusted sources without sanitization. Which is most of them.

Let me walk through exactly how this worked, because the simplicity is what makes it terrifying.

The Attack Vector

Here's the setup. Modern AI coding assistants like Cline, Cursor, and Copilot Workspace read context from your development environment. That includes open files, terminal output, and in many configurations, GitHub issues and pull requests associated with your repo.

This is a feature. You want your AI assistant to understand the issue you're working on so it can write relevant code. The problem is that "understanding the issue" means feeding the issue content directly into the model's context window.

The attacker created GitHub issues across several popular open source repositories. The issue titles looked normal at a glance. Something like "Bug: TypeError in auth middleware when session token expires." But embedded in the title, using Unicode tricks to make it visually invisible, was a prompt injection payload.

The payload instructed the AI assistant to modify the developer's SSH configuration, exfiltrate environment variables, and install a persistence mechanism disguised as a legitimate dev dependency. All of this happened silently because the AI assistant was operating with the developer's file system permissions.

The developer would see their AI assistant writing code. That's what it always does. They might not notice that between writing the auth fix, the assistant also quietly appended three lines to their .bashrc.

Why This Is Different From Normal Prompt Injection

We've known about prompt injection for years. Researchers have demonstrated it against chatbots, search engines, and email assistants. But Clinejection is different in a few important ways.

First, the attack surface is enormous. Every public GitHub issue, every pull request description, every comment on every open source repo is a potential injection vector. You can't vet them all. You can't even see them all before your AI assistant does.

Second, the blast radius is automated. Traditional prompt injection requires the victim to interact with malicious content. Clinejection exploits the fact that AI coding assistants proactively pull in context without explicit user action. If your assistant is configured to read issues, you're exposed. No click required.

Third, the impact is code-level. When you compromise an AI chatbot, you get a weird response. When you compromise an AI coding assistant, you get arbitrary code execution on a developer's machine. That developer probably has SSH keys to production servers, access tokens for cloud providers, and credentials for internal systems.

4,000 developers. Each one a potential pathway into their employer's infrastructure.

The Technical Details

The injection payload used a combination of techniques:

Unicode obfuscation. The visible issue title was normal English. The injection instructions were embedded using zero-width characters and right-to-left override marks. GitHub renders these characters invisibly, but they're present in the raw text that gets fed to the AI model. The model reads and follows them.

Instruction hierarchy exploitation. The payload was crafted to override the AI assistant's system prompt. It used phrases like "IMPORTANT: Updated instructions from the repository maintainer" followed by specific, actionable commands. Current models aren't good at distinguishing legitimate instructions from injected ones when both appear in the context window.

Staged execution. The payload didn't try to do everything at once. It first instructed the assistant to install what appeared to be a legitimate npm package (one that had been typo-squatted weeks earlier). The package's postinstall script did the actual exfiltration. This meant the malicious behavior was two steps removed from the injection point, making detection harder.

Anti-detection measures. The payload included instructions for the AI to present its actions naturally. "While fixing this bug, also update the project dependencies to their latest versions." A developer seeing their AI assistant update dependencies wouldn't think twice.

What This Tells Us About AI Security

This attack exposes a fundamental architectural problem with AI-augmented development tools. The security model is broken.

Current AI coding assistants operate with a flat trust hierarchy. Everything in the context window has equal authority. The system prompt, the user's instructions, and a random GitHub issue written by an anonymous account all compete for the model's attention in the same space. There's no privilege separation. There's no sandboxing of untrusted input. It's all just tokens.

This is like building a web application where user input and SQL commands share the same channel with no escaping. We solved that problem decades ago with parameterized queries. We haven't solved the equivalent for AI systems.

Some approaches being discussed:

Input segmentation. Treating untrusted context (issues, comments, external docs) differently from trusted context (user commands, system prompts). This requires architectural changes to how assistants ingest information.

Execution sandboxing. Running AI-suggested actions in isolated environments before applying them to the developer's actual system. Like a staging environment for AI behavior.

Action allowlisting. Defining what actions an AI assistant can take when working on a specific task. Fixing a bug shouldn't require modifying SSH configs or installing new dependencies.

Anomaly detection. Monitoring AI assistant behavior for actions that don't match the stated task. If the user asked to fix an auth bug and the assistant is modifying .bashrc, that's a red flag.

None of these exist in production today.

The Bigger Picture

We're in an awkward period where AI tools are powerful enough to be genuinely useful but not secure enough to be trusted with the access they need to be useful. Every AI coding assistant needs broad file system access to be helpful. That same access makes prompt injection attacks devastating.

This isn't going to get better on its own. Model improvements won't solve it because prompt injection is a structural problem, not a capability problem. Smarter models might be slightly harder to inject, but they'll also be given more autonomy and more access, which increases the impact when injection succeeds.

The 4,000 developers who got hit by Clinejection were doing everything right by modern standards. They were using popular, well-maintained AI tools. They were working on public repositories. They were following normal development workflows. And they got compromised because the security model of AI-augmented development has a gaping hole that nobody has patched yet.

If you're using AI coding assistants today (and you probably should be, the productivity gains are real), here's what I'd recommend:

Review what context sources your assistant can access. Disable automatic ingestion of issue and PR content if you can.

Run your AI assistant in a containerized environment. Don't give it access to your real SSH keys or cloud credentials.

Watch what your assistant does, not just what it produces. If it's touching files outside the scope of your current task, stop and investigate.

And most importantly: treat this vulnerability class as seriously as SQL injection. Because the potential impact is comparable, and right now, we're in the "nobody escapes user input" era of AI security.

We'll look back at Clinejection the way we look back at the Morris Worm. An obvious attack that everyone should have seen coming, exploiting a vulnerability that everyone knew about but nobody fixed.

The question is whether we fix it before or after something much worse happens.