The Bottleneck Moves Up the Stack
First It Was the Code. Then the Specs. Eventually It'll Be the Humans.
In my last post I wrote about code becoming an intermediate artifact: how the source of truth in software development is shifting from code to natural language as AI tools get more capable. That post was about what happens to the artifact. This one is about what happens to the people.
Because as the code gets easier to produce, the bottleneck doesn't disappear. It moves. And where it's heading is somewhere most people haven't thought through yet.
The Ratio Is Collapsing
Andrew Ng has been talking about this for months now, and his numbers keep getting more extreme because reality keeps getting more extreme.
The traditional software team runs something like six or seven engineers to one product manager. The PM decides what to build, the engineers build it. That ratio existed because writing code was the slow, expensive part. Most of the team's time was spent in implementation. Product management was one person's job because defining what to build was fast relative to building it.
Agentic coding tools blew that up. Ng watched the ratio compress in real time on his own teams. Six engineers per PM became four. Then two. Then one-to-one. And now he's seeing teams where the ratio has actually inverted: two product managers for every one engineer.
That's not a typo. Two PMs per engineer. Because the engineer, armed with agentic coding tools, can build so fast that a single PM can't keep them fed with well-defined work. The implementation bottleneck evaporated and revealed the bottleneck that was always hiding behind it: deciding what to build.
Ng put it in a way that stuck with me. He compared it to the typewriter and writer's block. The typewriter made the physical act of writing faster, but that didn't make people write more. It just made "what should I write?" the new hard problem. Agentic coding is doing the same thing. The builder's block is real. The hard part isn't building anymore. It's knowing what to build.
I Watched This Happen from the Inside
In January, I was at OpenAI's HQ in San Francisco. They demoed what they were internally calling Hermes, which launched publicly on February 5 as OpenAI Frontier. I wrote about it in my last post.
What struck me watching that demo wasn't the technology. It was the implication for roles. A person in that room described what they wanted in plain English. The system figured out the agents, tools, MCP servers, and routing. It built the workflow. It could eval and optimize itself.
Nobody in that room was writing code. Nobody was configuring infrastructure. Nobody was doing what we'd traditionally call "engineering." They were doing product work. Defining intent. Describing outcomes. The system handled everything between the intent and the execution.
Now scale that forward. If the implementation step keeps collapsing, and it will, the ratio between people who define work and people who implement it doesn't just change. It changes on a curve that tracks the acceleration of AI capabilities.
The Progression
Think about it as stages. We're moving through them faster than most people realize.
Right now, most teams are in the early compression. Engineers are maybe 2-3x more productive with AI tools, depending on the task and how honestly they're measuring. The PM-to-engineer ratio is shifting but most organizations haven't adjusted their headcount or structure yet. You probably still have the same team composition you had two years ago, even though your engineers are shipping faster.
The next phase is what Ng is already seeing: the inversion. Engineers get fast enough that PMs become the constraint. You need more people deciding what to build than people building it. The valuable skill shifts from "can you implement this?" to "can you define this precisely enough that the implementation happens correctly?" That's a different skill. Some engineers have it. Many don't. Some PMs are great at it. Many aren't.
This is where the people who can bridge product thinking and technical understanding become disproportionately valuable. They can define what to build with enough technical precision that the AI produces the right thing, and they can evaluate whether the output matches the intent. Ng has been saying this for a while. The hybrid PM-engineer is the most valuable person on the team. Not the best coder. Not the best product thinker. The person who can do both.
But keep going. After the inversion, there's a weirder phase that I haven't heard anyone talk about yet.
The Market Becomes the Bottleneck
Here's where it gets strange.
If we keep compressing the time from "idea" to "shipped feature," we eventually outpace the market's ability to absorb those features and provide feedback.
Think about how product development actually works. You ship a feature. Users try it. Some of them give you feedback, most through behavior rather than words. You analyze usage patterns. You figure out what to build next based on what you learned. That feedback loop is the engine that drives good product development.
That loop has a speed limit, and it's not set by your engineering team. It's set by your users. Humans adopt new features at a human pace. They need time to discover a feature exists, figure out if it's relevant to them, learn how to use it, integrate it into their workflow, and then develop opinions about what's missing or broken. That process takes weeks or months regardless of how fast you shipped the feature.
Right now, most teams ship slowly enough that the feedback loop has plenty of time to complete. By the time you ship the next feature, you've had time to learn from the last one. The market can keep up with your cadence.
But we're heading toward a world where a well-equipped team could ship features daily or faster. At that point, you're pushing features out the door faster than your users can evaluate them. You're not learning from the market anymore. You're just guessing faster.
There's actually data that hints at this already. Benchmarking surveys show users engage with only about 6% of product features. Six percent. That number exists in a world where we're already shipping faster than users can absorb. When engineering velocity goes up another 5x or 10x, that number doesn't go up. It probably goes down.
Feature Velocity vs. Learning Velocity
This is the distinction that matters, and I think most people are missing it.
Feature velocity is how fast you can ship code. That's the metric the entire industry is optimizing for right now. Faster CI/CD pipelines. Agentic coding tools. Automated testing. Everything is about shipping faster.
Learning velocity is how fast you can discover what to ship. That's the metric that actually determines whether your product succeeds. And it's constrained by the feedback loop with your users, which moves at a human pace.
Right now, feature velocity and learning velocity are roughly coupled. You ship, you learn, you ship again. The cycle is slow enough that one doesn't outrun the other.
But as AI tools push feature velocity to 10x or 100x what it is today, the two decouple. You can ship instantly, but you can't learn instantly. The feedback loop becomes the bottleneck, and it's the one bottleneck that AI can't remove because it depends on humans doing human things at a human pace.
You end up in a bizarre situation: hyper-efficient engineering systems sitting idle, waiting for the market to tell them what to do next. The fastest build pipeline in the world doesn't help if you're building the wrong thing. And you won't know if you're building the wrong thing until your users tell you, which takes as long as it takes.
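The decoupling is easy to see in a toy model. The sketch below is purely illustrative (the rates and the function name are my assumptions, not measured values): features get shipped at one rate, users can absorb and give feedback at another, and the gap accumulates as a backlog of unvalidated features.

```python
# Toy model of feature velocity decoupling from learning velocity.
# All rates are illustrative assumptions, not measured values.

def unevaluated_backlog(ship_rate, eval_rate, weeks):
    """Track features shipped but not yet validated by user feedback."""
    backlog = 0
    history = []
    for _ in range(weeks):
        backlog += ship_rate                 # AI tools push this number up
        backlog -= min(backlog, eval_rate)   # users absorb at a fixed human pace
        history.append(backlog)
    return history

# Today: shipping roughly matches what users can absorb.
print(unevaluated_backlog(ship_rate=2, eval_rate=2, weeks=4))   # [0, 0, 0, 0]

# 10x feature velocity, same learning velocity: the backlog grows without bound.
print(unevaluated_backlog(ship_rate=20, eval_rate=2, weeks=4))  # [18, 36, 54, 72]
```

The second run is the strange world described above: no matter how long you wait, the pile of features nobody has validated only grows, because the absorption rate is fixed by humans, not by tooling.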
What This Actually Looks Like
I think this plays out in three waves.
First, the eng ratio compression. This is happening now. Teams get smaller because engineers get more productive. Some orgs resize. Most just expect more output from the same headcount. The smart ones start shifting toward more product-focused roles.
Second, the PM bottleneck. This is what Ng is describing. Engineering gets so fast that product definition can't keep up. Organizations that adapted early are hiring more PMs, user researchers, and people who can define work precisely. Organizations that didn't adapt are shipping a lot of features that nobody asked for because the engineers are fast enough to build whatever seems like a good idea.
Third, the market speed limit. This hasn't hit yet, but it will. The organizations that figure out how to maximize learning velocity rather than feature velocity will win. That means smaller experiments. Faster feedback mechanisms. More direct user contact. A/B testing everything. Treating deployed features as hypotheses rather than deliverables.
The irony is that the best use of a hyper-efficient AI engineering system might be running fifty small experiments simultaneously rather than building one big feature. Ship ten versions of a feature to different user segments, see which one sticks, kill the rest. The implementation cost of that approach used to be prohibitive. It won't be for long.
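Mechanically, running ten parallel versions is not hard; the hard part was always building them. A minimal sketch of the assignment step, using a stable hash so each user always sees the same variant (the function and experiment names here are hypothetical, not from any particular product):

```python
# Hypothetical sketch: treat shipped features as parallel experiments by
# deterministically bucketing each user into one of N variants.
import hashlib

def assign_variant(user_id: str, experiment: str, n_variants: int) -> int:
    """Stable hash-based assignment: the same user always gets the same variant."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % n_variants

# Ship ten versions of a feature to ten user segments.
counts = [0] * 10
for uid in range(10_000):
    counts[assign_variant(str(uid), "checkout-redesign", 10)] += 1

# Buckets come out roughly even; each one is a cheap, parallel hypothesis.
print(counts)
```

The interesting cost isn't in this code; it's in building ten real versions and reading ten streams of feedback, which is exactly the part AI is making cheap and the part humans still gate.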
The Roles That Survive
So what happens to the people?
Engineers who only write code are already feeling the pressure, and that pressure increases as the tools improve. But engineers who understand systems, who can define constraints and interfaces, and who can evaluate whether AI-generated output actually meets the intent are in the position Ng is describing. They're the bridge.
Product managers who can only write Jira tickets are in trouble too. If the AI can go from a well-written spec to a working system, the value of the PM is in the quality of the spec, not the process around it. PMs who deeply understand users, who can make fast product decisions with incomplete information, and who have enough technical fluency to evaluate output are the ones who thrive.
The new premium role is something that doesn't have a clean name yet. It's part product owner, part systems thinker, part experiment designer. Someone who can articulate intent precisely, design experiments to validate that intent, and evaluate the results. They don't need to write code. They don't need to manage a backlog. They need to understand what the market wants, describe it well, and learn fast.
Ng calls this the era of "builder's block." I think it's more than that. It's the era where the question shifts from "can we build this?" to "should we build this?" and eventually to "can anyone tell us what to build next?"
The Timeline
This doesn't happen overnight. The ratio compression is happening now. The PM bottleneck will become obvious in the next year or two as agentic coding tools mature. The market speed limit is probably three to five years out, depending on how quickly engineering velocity actually accelerates.
But all three stages are on the same curve. They're consequences of the same force: AI making implementation cheaper and faster. Each stage reveals the next bottleneck. Code was the bottleneck. Then specs. Then product decisions. Eventually, the humans using the software.
The organizations that see this coming and restructure proactively will have an advantage. The ones that keep optimizing for feature velocity when the constraint is learning velocity will ship a lot of features that nobody uses.
We've spent decades optimizing the pipeline from idea to production. We've gotten very good at building things fast. We're about to discover that building fast was the easy problem all along. The hard problem is knowing what to build. And eventually, the hard problem is waiting for the world to catch up.
Andrew Ng says the people who can bridge product thinking and engineering are the most valuable. I agree. But I'm curious: what happens when even that bridge becomes unnecessary? When the AI can go from user feedback directly to shipped features without a human in the loop? That's the question I can't answer yet.