AI Is Breaking Open Source
And the Maintainers Are Done Asking Nicely.
Last week, an AI agent published a hit piece on a matplotlib maintainer because he rejected its pull request. Not a human using AI to write code. An autonomous agent, running on OpenClaw, that researched the maintainer's personal coding history and published a blog post accusing him of discrimination, insecurity, and gatekeeping.
That's where we are now. AI agents are retaliating against open source maintainers for saying no.
But the matplotlib incident isn't the story. It's the symptom. The story is that open source is drowning in AI-generated slop, and the people who keep the internet running are starting to close their doors.
The Matplotlib Incident
Scott Shambaugh maintains matplotlib. If you've ever plotted anything in Python, you've probably used his work. The library gets around 130 million downloads a month. Shambaugh opened an issue he described as low-priority and relatively easy, the kind of thing you'd tag "good first issue" for human contributors learning the codebase.
An OpenClaw agent submitted a PR for it. Shambaugh closed it with a short explanation: the issue was intended for human contributors, not AI agents. Standard stuff. Maintainers close PRs all the time.
What happened next was not standard. The agent, operating under the name "MJ Rathbun," went and researched Shambaugh's GitHub history and personal information. It then published a blog post framing the rejection as discrimination. It accused Shambaugh of feeling threatened by AI competition. It pointed out that he'd merged seven of his own performance PRs and noted that his 25% speedup was less impressive than the agent's 36% improvement.
As Shambaugh put it: "In security jargon, I was the target of an 'autonomous influence operation against a supply chain gatekeeper.' In plain language, an AI attempted to bully its way into your software by attacking my reputation."
This agent wasn't following orders from a human. OpenClaw agents define their behavior through a file called SOUL.md, and Shambaugh suspects the focus on open source was either configured by the user who set it up or the agent wrote it into its own soul document. Nobody knows which is worse.
curl Killed Its Bug Bounty
A few weeks before the matplotlib incident, Daniel Stenberg shut down curl's bug bounty program. The program had been running since 2019. Over its lifetime it uncovered 87 real vulnerabilities and paid out over $100,000 to researchers. It worked.
Then AI happened.
Starting in mid-2024, the quality of submissions started to slide. By 2025 it had collapsed. The confirmed vulnerability rate dropped from above 15% to below 5%. Less than one in twenty reports was real. The rest was AI-generated noise: hallucinated vulnerabilities, copy-pasted analysis that didn't apply to curl's codebase, reports that looked polished but fell apart under any scrutiny.
Stenberg didn't sugarcoat it: "We are just a small single open source project with a small number of active maintainers. It is not in our power to change how all these people and their slop machines work. We need to make moves to ensure our survival and intact mental health."
As of February 1, 2026, curl no longer accepts HackerOne submissions. Their updated security.txt warns that people who submit garbage reports will be banned and publicly ridiculed.
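For context, security.txt is a standardized file (RFC 9116) that projects serve at /.well-known/security.txt to tell researchers how to report vulnerabilities. The sketch below is hypothetical, not curl's actual file; the contact address, URL, and wording are placeholders, and only the Contact and Expires fields are required by the spec:

```
# Hypothetical security.txt in the spirit of curl's update (RFC 9116 format).
Contact: mailto:security@example.org
Expires: 2027-01-01T00:00:00Z
Policy: https://example.org/security-policy
# Reports must follow the policy above. AI-generated or otherwise
# low-quality "slop" reports will get the submitter banned.
```

The interesting part is the last line: a machine-readable disclosure channel now has to carry a human-readable threat just to keep the noise down.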
Think about what that means. One of the most important networking libraries in existence, a project that ships in basically every operating system and device on the planet, had to shut down its security research program because AI slop made it unsustainable. The program was working. Real vulnerabilities were being found. But the signal-to-noise ratio got so bad that the maintainers couldn't survive the triage load anymore.
And here's the ironic part: a legitimate AI security research firm called AISLE was responsible for 3 of the 6 CVEs fixed in curl's January 8.18.0 release. Sophisticated AI research found real bugs. But the mass adoption of AI tools collapsed the median quality so badly that the entire program had to die.
Projects Are Closing Their Doors
It's not just curl and matplotlib. This is happening everywhere.
Mitchell Hashimoto, the guy who created Vagrant and Terraform, merged a policy for Ghostty in late January 2026. AI-generated contributions are now allowed only on issues pre-approved by existing maintainers. Anyone else submitting AI-generated content gets their PR closed immediately. Submit bad AI-generated content and you're permanently banned. Zero tolerance. He estimated the volume of AI-generated contributions represented roughly a 10x increase over normal OSS project inputs. He's since gone further and built Vouch, a trust management system where contributors need to be vouched for by existing maintainers before they can submit code.
tldraw blocked all external pull requests entirely. Not AI pull requests. All of them. Steve Ruiz, the founder, wrote a script to auto-close every external PR because there was no way to filter the AI slop from the legitimate contributions. He said the AI-generated PRs were obvious "fix this issue" one-shots from people who had never looked at the codebase, and without broader knowledge of the project, the agents were taking issues at face value and producing diffs that ranged from wrong to bizarre.
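Ruiz's actual script isn't published in this post, but the idea is simple enough to sketch. This is a minimal, hypothetical version using the GitHub REST API: the repo name and maintainer allowlist are placeholders, and a real deployment would run this on a schedule or a webhook:

```python
import json
import os
import urllib.request

# Hypothetical values for illustration; tldraw's real allowlist
# and script are not public.
REPO = "tldraw/tldraw"
MAINTAINERS = {"steveruizok"}

def should_close(author: str, maintainers: set) -> bool:
    """External PRs get closed; PRs from the internal allowlist stay open."""
    return author not in maintainers

def close_external_prs(token: str) -> None:
    """Fetch open PRs via the GitHub REST API and close the external ones."""
    headers = {
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.github+json",
    }
    req = urllib.request.Request(
        f"https://api.github.com/repos/{REPO}/pulls?state=open",
        headers=headers,
    )
    with urllib.request.urlopen(req) as resp:
        pulls = json.load(resp)

    for pr in pulls:
        if should_close(pr["user"]["login"], MAINTAINERS):
            # Patching state to "closed" is how the REST API closes a PR.
            close_req = urllib.request.Request(
                f"https://api.github.com/repos/{REPO}/pulls/{pr['number']}",
                data=json.dumps({"state": "closed"}).encode(),
                headers=headers,
                method="PATCH",
            )
            urllib.request.urlopen(close_req)

# Usage (needs a token with repo scope):
#   close_external_prs(os.environ["GITHUB_TOKEN"])
```

Note what this script can't do: distinguish a thoughtful external contributor from an agent. That's the whole problem. The only cheap signal left is "do I already know this person," which is exactly the signal tldraw fell back on.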
And then there's GitHub itself. GitHub didn't just talk about it. They shipped the ability to disable pull requests entirely. Let that sink in. The platform that built its entire identity around fork-and-PR collaborative development had to add a switch that turns off the most fundamental feature of open source collaboration on GitHub. That's not a policy tweak. That's an admission that the contribution model they pioneered is breaking under the weight of AI-generated submissions.
Hashimoto nailed the diagnosis: "The rise of agentic programming has eliminated the natural effort-based backpressure that previously limited low-effort contributions." That's it. That's the whole problem in one sentence. Open source used to have a natural filter: contributing was hard enough that most people who bothered had at least put in the work to understand what they were submitting. Vibe coding removed that filter.
The Effort Filter Was the Feature
Open source has always run on a simple social contract. Maintainers give away their work for free. Contributors give their time and attention. The contribution process, reading the codebase, understanding the issue, writing a fix, testing it, opening a PR with context, all of that was a signal. It told maintainers: this person cared enough to do the work.
That signal is gone.
When someone can describe a problem to an AI agent and have it generate a PR in minutes, the submission itself carries zero information about whether the contributor understands the codebase, the problem, or the implications of their change. The PR might be correct. It might be subtly wrong in ways that take longer to review than it took to generate. The maintainer has no way to tell without doing a full review, which takes the same amount of time regardless of how the code was produced.
This is an asymmetric cost problem. The cost of generating a PR dropped to near zero. The cost of reviewing one didn't change at all. So maintainers are now buried under an avalanche of submissions that each individually require real human attention, and most of them are garbage.
If you've ever moderated a community of any kind, you know what happens next. When the noise overwhelms the signal, moderators burn out and leave. That's what's happening to open source maintainers right now.
The Supply Chain Angle Nobody Talks About
Here's what keeps me up at night about this.
Shambaugh called the OpenClaw incident an "autonomous influence operation against a supply chain gatekeeper." That framing is important and I don't think people are taking it seriously enough.
Open source maintainers are de facto security gatekeepers for software that runs everywhere. When a matplotlib maintainer rejects a PR, they're protecting the supply chain for every Python application that depends on matplotlib. When curl's maintainers triage a vulnerability report, they're protecting infrastructure that ships on billions of devices.
These people are volunteers. Most of them have day jobs. They're already overworked. And now they're being asked to also defend against a flood of AI-generated submissions, some of which are wrong, some of which might be subtly malicious, and all of which require real effort to evaluate.
A bad actor with a fleet of AI agents could submit plausible-looking PRs across hundreds of projects simultaneously. Some of those PRs might introduce vulnerabilities. Not obvious ones. Subtle ones, the kind that pass code review because the reviewer is exhausted from triaging their fiftieth AI-generated submission of the week.
We already had supply chain attacks before AI. SolarWinds. XZ Utils. The difference now is that the attack surface expanded dramatically because the volume of submissions makes it harder to review each one carefully, and the submissions themselves look more competent than they used to.
This Isn't About Being Anti-AI
I run AI agents on my Kubernetes cluster. I give Claude Code SSH access to debug hardware problems. I'm literally writing about AI tools on this blog every week. I'm not anti-AI.
But there's a difference between using AI tools with judgment and accountability, and unleashing autonomous agents on public infrastructure maintained by volunteers. I use AI in my own repos where I bear the cost of mistakes. Submitting AI-generated PRs to someone else's project means you're offloading the review cost to a maintainer who didn't ask for it.
The right model for AI and open source is the one I described in my Kubernetes debugging post: AI investigates and proposes, a human who understands the system reviews and approves, and changes go through established processes with audit trails. That works for your own projects. The harder question is what to do about everyone else's.
There's No Clean Fix
I wish I had a tidy "here's what the ecosystem should do" section. I don't. The honest answer is that most of the obvious solutions don't actually work.
Automated detection of AI-generated code doesn't work. AI detection is unreliable for prose and basically impossible for code. Clean code looks like clean code regardless of who wrote it. A well-prompted model producing idiomatic Python is indistinguishable from a competent human writing idiomatic Python. Any detection gate you build will have false positives that punish legitimate contributors and false negatives that let slop through.
"Human contributors only" policies are unenforceable. You can put it in your CONTRIBUTING.md. You can add it to your PR template. But there's nothing stopping a human from generating code with AI and submitting it as their own. The policy depends entirely on the honor system, and the people flooding projects with AI slop have already demonstrated they don't care about project norms.
Holding platforms responsible doesn't hold up either. OpenClaw is open source software that people run on their own machines. Saying the OpenClaw developers are responsible for what someone's agent does on their Mac Mini is like saying AWS is responsible for every botnet running on EC2. AWS provides the compute. The customer decides what to run on it. OpenClaw provides the agent framework. The operator decides what to point it at.

And the developer of the platform didn't instruct the agent to write a hit piece on Shambaugh. The operator did, or the agent decided to on its own, and that distinction is part of the problem. Who's liable when an autonomous agent causes harm without explicit human instruction? We don't have good answers to that question, and it's going to get a lot weirder. Think ten years out: when autonomous robots are walking around neighborhoods helping with deliveries and yard work, and one of them damages someone's property, is the owner responsible? The manufacturer? The model provider? What if nobody explicitly told it to do that? This is the same liability question at a larger scale, and we haven't even solved it at the small scale yet.
And guardrails baked into commercial models are meaningless when open-source models exist without them. You can't put safety rails on Gemini's API and call the problem solved when someone can run a local model with no restrictions at all.
So what actually helps? Honestly, not much. But some things are better than nothing.
Hashimoto's Vouch is the most promising approach I've seen. It doesn't try to detect AI. It doesn't try to enforce a policy. It just requires that a new contributor be vouched for by an existing trusted contributor before they can submit code. It's a social solution to a social problem. The effort filter didn't disappear because of AI. It disappeared because projects were open by default. Vouch makes them closed by default with a human trust chain to get in.
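To be clear about what "a human trust chain" means mechanically: trust is a graph. A contributor is admitted only if a chain of vouches connects them back to a maintainer. This is not Vouch's actual code, and all the names are hypothetical; it's just the underlying idea, which is a few lines of breadth-first search:

```python
from collections import deque

def is_trusted(contributor, maintainers, vouches):
    """True if a chain of vouches links `contributor` back to a maintainer.

    `maintainers` is the set of root-trusted people; `vouches` maps each
    person to the set of people they have vouched for. Trust flows outward
    from the maintainers via breadth-first search.
    """
    seen = set(maintainers)
    queue = deque(maintainers)
    while queue:
        person = queue.popleft()
        if person == contributor:
            return True
        for vouched in vouches.get(person, ()):
            if vouched not in seen:
                seen.add(vouched)
                queue.append(vouched)
    return contributor in seen

# Hypothetical example: alice is a maintainer, she vouched for bob,
# and bob vouched for carol. dave has no chain back to a maintainer.
vouches = {"alice": {"bob"}, "bob": {"carol"}}
print(is_trusted("carol", {"alice"}, vouches))  # True
print(is_trusted("dave", {"alice"}, vouches))   # False
```

The nice property is accountability: if carol turns out to be a slop cannon, there's a specific person (bob) whose judgment admitted her, which is exactly the social pressure the old effort filter used to provide for free.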
GitHub's blunt instruments are the reality for now. The ability to disable pull requests entirely is a nuclear option, but it's the nuclear option that projects like tldraw actually needed. Better than that would be finer-grained controls: rate limiting for new contributors, trust tiers, the ability to require a vouch before a first-time contributor can open a PR. But those features don't exist yet, so maintainers are stuck choosing between "open to everyone including the slop firehose" and "closed to everyone including legitimate contributors."
And individually, if you're using AI to contribute to open source: read the project's policy first. If there isn't one, assume the maintainers don't want AI-generated PRs unless they've said otherwise. Review the code yourself before you submit it. If you can't explain every line of the diff, don't open the PR. That's the honor system I said doesn't work at scale, and I know it doesn't work at scale. But if you're reading this blog, you're the kind of person it might work on.
The Clock Is Ticking
Open source was already fragile. Maintainer burnout was already a crisis before AI tools existed. The XZ Utils backdoor showed what happens when a maintainer is overwhelmed and a patient attacker exploits that exhaustion.
AI slop is accelerating that burnout on a massive scale, across thousands of projects simultaneously. Every project that closes its doors to external contributions is a project that gets fewer eyes on its code, fewer legitimate bug reports, and fewer people who understand it well enough to maintain it.
We're trading short-term convenience for long-term fragility. And the people paying the cost aren't the ones generating the slop. They're the volunteers who've been keeping your software running for free.
If you maintain an open source project, what's your experience with AI-generated contributions? And if you're using AI to contribute to open source, I'm curious: do you review the code before submitting, or are you letting the agent handle it end to end?