Anthropic Wants to Be a Utility When It's Convenient
And a walled garden the rest of the time.
Anthropic tells governments that AI is so powerful it needs safety regulation. Dario Amodei (Anthropic's CEO) writes essays about how it will reshape civilization. The company spends millions lobbying for AI regulation because, in their own words, the technology is too important to leave ungoverned. They want a seat at the table where infrastructure policy gets made.
Then they legally threaten a third-party dev tool into removing the ability for paying customers to use compute they already paid for. Same model. Same tokens. Same servers. Same usage caps. The only difference was which terminal the request came from. Anthropic decided that was enough to shut it down.
Pick a lane. You're either building critical infrastructure that society depends on, or you're selling a proprietary product with a proprietary client. You don't get to wear the "this technology will change everything" hat when you're talking to Congress and the "this is our private platform, use our tools or get out" hat when you're talking to developers.
You've Heard This Argument Before
Comcast sells you internet access, then steers you toward Comcast's own streaming service by throttling competitors. Society decided that was wrong. If you're paying for bandwidth, you should be able to use it with whatever application you choose. The pipe is the pipe.
Anthropic is selling compute, then dictating which client can consume it. The Max plan is the pipe. Claude Code is Comcast's streaming service.
The analogy isn't perfect. Anthropic isn't a utility in the legal sense, and they aren't a monopoly the way a regional ISP is. But the spirit of the complaint holds: when the underlying resource is identical and the limits are identical, restricting the client is about lock-in, not resource management.
What This Looks Like on the Ground
This week, Anthropic's legal team forced OpenCode to rip out OAuth token support. Over 350 developers downvoted that PR overnight. People are already building workaround plugins to restore what was taken away. And it appears Anthropic went further, blocking existing tokens server-side even for users running older versions of OpenCode that still had the integration. They'd warned people back in January that third-party token use violated their ToS, so this wasn't a surprise. But the enforcement pattern tells you something about priorities.
I pay $200 a month for a Claude Max plan. The inference cost to Anthropic is identical whether I use Claude Code or OpenCode. The usage limits are identical. The cost per token is identical. The only thing that changed was the client binary making the request. That was enough for Anthropic to send lawyers after an open source project and then apparently block tokens at the server level for good measure.
So I switched to a plain API key, billed at per-token rates. Two prompts. 43,000 tokens of Opus 4.6. An agent helped me plan a GitLab upgrade across my homelab cluster. My bill went from $0 to $0.90. For two prompts. Extrapolate that across a full day of agentic coding work and the API costs dwarf the $200 Max plan. Anthropic knows this. The Max plan is priced to be the obvious choice for heavy users, and then the only client allowed to use it is theirs. Use our client or pay ten times more. That's not a pricing model. That's a compliance mechanism.
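To make the gap concrete, here's the back-of-envelope math. The per-token rates and the input/output split below are assumptions chosen for illustration (they roughly reproduce the $0.90 figure above), not official Anthropic pricing:

```python
# Back-of-envelope comparison of per-token API billing vs. a flat subscription.
# Rates and token splits are illustrative assumptions, not official pricing.
INPUT_RATE = 15.00 / 1_000_000   # $/token, assumed Opus input rate
OUTPUT_RATE = 75.00 / 1_000_000  # $/token, assumed Opus output rate

def api_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in dollars for one exchange at the assumed rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Two prompts, ~44k tokens total, assuming most of it is input context.
two_prompts = api_cost(40_000, 4_000)

# Extrapolate: a heavy agentic day might repeat that volume 25x,
# across roughly 22 working days a month.
daily = two_prompts * 25
monthly = daily * 22
max_plan = 200.00

print(f"two prompts: ${two_prompts:.2f}")
print(f"monthly API estimate: ${monthly:.2f} vs Max plan ${max_plan:.2f}")
```

Even with conservative assumptions, the monthly API estimate lands at a multiple of the Max plan price, which is the whole point of the "pay ten times more" framing.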
And it's certainly a subsidized one. Anthropic has raised $64 billion in funding and is still burning billions annually. Their gross margin sits around 40%, down from projections because inference costs came in 23% higher than expected. The Max plan is almost certainly a loss leader, priced below cost to drive adoption. This is the playbook. Subsidize access now, get developers building their workflows deep inside Claude Code, and bet that by the time Anthropic needs to be profitable, the switching costs will be high enough that users just absorb the price increase. It worked for Uber. It worked for Amazon. The Congressional antitrust subcommittee flagged Amazon for exactly this pattern: pricing below cost to build dependency, then leveraging that dependency once competitors are gone.
Three Labs Control the Frontier
Here's where the monopoly angle gets stronger than most people want to admit.
There are three serious LLM providers right now: OpenAI, Anthropic, and Google. Between them they control roughly 79% of enterprise LLM spend. That's it. That's the market.
And unlike early internet infrastructure where the protocol was open and ISPs were interchangeable pipes, with LLMs the model is the product. You can't swap providers the way you'd swap ISPs and still reach the same internet. Each model has different strengths and different tool ecosystems built around it.
This is exactly where Anthropic's client restriction becomes borderline anti-competitive. A model-agnostic tool like OpenCode lets you switch providers with a config change. My beads setup, my multi-agent architecture, my context-management solutions all work regardless of which model is behind them. If Anthropic ships a bad release or Google drops something better tomorrow, I change one line and keep working. That's how competitive markets are supposed to function.
Claude Code doesn't work that way. It's welded to Anthropic's models. If you build your workflow around Claude Code and a better model appears somewhere else, you don't just switch models. You abandon your entire tooling setup, rebuild your workflow in a different harness, and then switch back again when Anthropic catches up. The switching cost isn't the model. It's the client. And Anthropic is the one creating that switching cost by bundling the client with the compute.
The barrier to entry makes this worse. You can't garage-startup a frontier model. The compute costs alone make this an oligopoly for the foreseeable future. Three companies control the technology that an increasing share of knowledge work depends on, and that number isn't going up anytime soon.
"It's Too Early to Call AI a Utility"
Some will argue that framing is premature. The internet took decades to reach utility status. Fair enough. But the numbers tell a different story.
ChatGPT hit 100 million users in two months. The web took seven years to reach the same number. A Harvard Kennedy School study found that generative AI reached 39.5% adoption in two years, double the internet's 20% adoption rate over the same period. OpenAI went from zero to $20 billion in annual revenue in under three years. For context, Yahoo did $67 million in its third year.
In late 2024, Claude was a useful chat assistant. By mid-2025, it was an autonomous coding agent operating across repos. Now I'm running multiple named agents through Discord on a dedicated Mac Mini. That's not early-adopter novelty. That's workflow infrastructure.
Developers felt it first, but legal, finance, healthcare, and education are right behind. The adoption curves are compressing because the barrier to entry is a text box. The internet required hardware rollouts, ISP buildouts, and digital literacy campaigns spanning years. AI required a browser tab.
The Companies Know the Timeline
Here's what makes the "it's too early" argument fall apart: the companies themselves are operating on the fast timeline.
Anthropic isn't spacing out Claude releases on a three-year cadence. In 2026 alone they've been shipping major capability updates roughly every two weeks. They're pricing Max plans and pushing Claude Code adoption now because they know the land grab is happening now. If they believed this was a five-year slow burn, they wouldn't be moving this aggressively on developer tooling lock-in.
Their own release cadence is the strongest evidence against "it's too early." They're building walled gardens at a pace that only makes sense if they believe AI will be essential infrastructure within a year, not a decade.
Client Restrictions Are Anti-Competitive Bundling
Even the weaker version of this argument is damning. If AI is just critical professional infrastructure, not a full utility, restricting which clients can consume your paid compute still looks like the kind of bundling practice that eventually draws regulators in.
A three-player oligopoly makes it worse. When there were dozens of ISPs, restrictive practices were annoying but you could switch. When three companies control frontier AI and each one locks you into their proprietary client, the switching cost stops being about the model. It's about the tooling. And the tooling lock-in is artificial. It's created by the provider, not required by the technology.
Anthropic is building a moat around Claude Code using their model as the bait. Bundling compute access with a proprietary client creates dependency that goes beyond model quality. They're not competing on the strength of the model alone. They're competing on the cost of leaving.
And they're doing it while losing money on every Max subscriber. The subsidized pricing isn't generosity. It's an investment in lock-in. The $200 plan gets you hooked. Claude Code makes sure you can't leave. When Anthropic eventually needs to turn a profit, and they will, they're betting that unwinding your entire workflow costs more than whatever they decide to charge you next.
The Telecommunications Act Moment Is Now
The time to make this argument is before the lock-in is complete, not after. By the time everyone agrees AI is a utility, the walled gardens will already be built. Every month that passes without this conversation happening at a regulatory level is another month of entrenchment.
We're watching the same playbook that ISPs ran in the early 2000s, compressed into months instead of years. AI will be regulated as critical infrastructure eventually. The only question is whether that happens before or after three companies have locked the entire knowledge economy into their respective ecosystems.
Right now, nobody's writing the bill. So write it yourself.
Cancel your Max plan. When Anthropic asks why, tell them. A friend of mine cancelled last night and put "OpenCode" as his reason. If enough people do that, it shows up in a churn report on someone's desk. Speak with your wallet, because that's the only language a company burning billions a year actually listens to.
And contact your representatives. The House Energy and Commerce Committee and the Senate Commerce Committee are the bodies that would oversee AI platform regulation the same way they handled telecom. Tell them what's happening. Tell them three companies control the compute that an increasing share of the economy runs on, and those companies are already locking users into proprietary tooling. They won't act on this until constituents make it a priority.
The walled gardens aren't finished yet. But they will be soon.