AI Doesn't Reduce Work, It Intensifies It

I've been using AI coding tools daily since Codex was first released. I ship faster than I ever have. I also feel more drained at the end of the day than I ever have.

I assumed it was just me. Turns out it's not.

The Study Everyone Should Read

In February 2026, UC Berkeley researchers Aruna Ranganathan and Xingqi Maggie Ye published what might be the most important AI workplace study so far. They spent eight months embedded at a 200-person tech company, observing in person twice a week, monitoring internal communication channels, and conducting 40 interviews across engineering, product, design, research, and operations. They weren't surveying what people thought about AI. They were watching what actually happened.

What they found was that nobody was told to do more. The company didn't mandate AI use. But people did more anyway, because AI made doing more feel possible. PMs started writing code. Researchers took on engineering tasks. Everyone expanded into adjacent roles because the barrier to entry dropped.

The researchers called it "workload creep." Workers filled knowledge gaps and absorbed colleagues' responsibilities. Work bled into lunch breaks, evenings, and early mornings. People multitasked more across parallel AI workflows. The work got faster, so people took on more of it. The scope expanded. The hours expanded. And it was voluntary, which made it harder to push back against.

A separate DHR Global survey of 1,500 professionals put a number on it: 83% reported experiencing burnout, with overwhelming workloads and excessive hours as the top drivers. The tech industry had one of the highest rates of moderate-to-extreme burnout, at 58%.

Andrew Ng Is Tired Too

Andrew Ng said it plainly at the LangChain Interrupt conference: after a full day of AI-assisted coding, he's "exhausted by the end of the day." He pushed back on the term "vibe coding" specifically because it implies the work is casual. It's not. It's a deeply intellectual exercise where you're constantly evaluating, correcting, and steering generated output.

If Andrew Ng, the guy who has been preaching AI productivity for years, admits the work is exhausting, maybe we should listen.

The Cognitive Load Didn't Disappear. It Moved.

Here's what I think is happening. AI removed the parts of the job that were physically slow but cognitively light. Typing code. Looking up syntax. Writing boilerplate. Those tasks took time, but they didn't take much mental energy. They were almost meditative. You could autopilot through a lot of it.

What replaced them is cognitively heavy. You're reading AI-generated code and deciding if it's correct. You're catching subtle bugs in code you didn't write and don't fully understand. You're making judgment calls about architecture suggested by a model that doesn't know your system. Every interaction is a decision point.

The Harness State of Software Delivery report backs this up. 67% of developers say they spend more time debugging AI-generated code. 59% experience deployment errors at least half the time when using AI tools. The execution got faster. The cleanup got bigger.

This is the swap: you traded typing effort for interpretation effort. Mechanical work for cognitive work. And cognitive work doesn't have a natural off switch. You can stop typing. You can't stop thinking about whether that function the AI wrote handles the edge case correctly.

The Perception Gap Makes It Worse

The METR randomized controlled trial found that experienced open-source developers were 19% slower when using AI tools. But here's the part that matters for burnout: those same developers believed AI had made them 20% faster. That's a 39-percentage-point gap between perception and reality.
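That gap is just arithmetic on the two numbers the study reports, but it's worth making explicit. A minimal sketch, using the 19%-slower and 20%-faster figures quoted above (the variable names are mine, not the study's):

```python
# Perception gap from the METR numbers quoted above.
# Actual: developers were 19% slower with AI tools (a negative speed change).
# Perceived: the same developers believed they were 20% faster.
actual_speed_change = -0.19
perceived_speed_change = 0.20

# The gap between perception and reality, in percentage points.
gap_pp = (perceived_speed_change - actual_speed_change) * 100
print(f"Perception gap: {gap_pp:.0f} percentage points")  # prints 39
```

The point isn't the arithmetic. It's that both numbers came from the same developers on the same tasks.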

You feel fast. You feel productive. And because you feel productive, you keep going. You take on more. You say yes to the next feature because surely you can knock it out, you've got AI helping. The perception of speed becomes the justification for more work.

I wrote about this perception gap in Vibe Coding Works Until It Doesn't. The feeling of productivity is doing real damage because it masks the cognitive cost.

Roles Are Blurring and Nobody Asked For It

The UC Berkeley study found that people voluntarily expanded into adjacent roles. This isn't isolated. Figma's Shifting Roles report found that 64% of product team members now identify with two or more roles, and 72% cite AI tools as the reason.

PMs are building prototypes. Designers are writing CSS. Engineers are making product decisions. AI made the boundary-crossing feel easy, so people crossed.

The problem is that "can do" turned into "expected to do." Once your PM ships a working prototype with an AI coding agent, the expectation resets. Now that's part of the PM job. The role expanded, but the title, comp, and headcount didn't. You just absorbed someone else's work on top of your own.

I talked about this role compression in The Bottleneck Moves Up the Stack. Andrew Ng has talked about PM-to-engineer ratios shifting dramatically, potentially to more PMs than engineers as AI handles more of the implementation. What nobody mentions is that those PMs aren't doing less PM work. They're doing PM work plus engineering work. The ratio compressed, but the total work expanded.

The Jevons Paradox of Knowledge Work

There's an economic concept called the Jevons Paradox. When you make a resource more efficient to use, people don't use less of it. They use more. Steam engines got more fuel-efficient, so people used more coal, not less.

Aaron Levie, CEO of Box, made this connection to AI explicitly: "By making it far cheaper to take on any type of task that we can possibly imagine, we're ultimately going to be doing far more."

He's right, and he's saying it like it's a good thing. But from the developer's seat, "doing far more" isn't a feature. It's a treadmill. The bar rises to meet the new capacity. AI makes you 2x more productive, so now you're expected to deliver 2x the output. The efficiency gain goes to the company, not to you.

This is what Upwork found when they surveyed 2,500 workers: 77% of employees using AI say it has increased their workload. Not decreased. Increased. And 47% of them don't even know how to achieve the productivity gains their employers expect. The tool that was supposed to save time is creating more work.

High-Functioning Burnout

There's a specific kind of burnout here that's hard to catch. You're still shipping. Your PRs are still flowing. Velocity charts look great. From the outside, you look more productive than ever.

But you're running on cognitive fumes. Decision quality degrades. You rubber-stamp the AI's suggestion because you're too tired to think critically about it. You skip the edge case review because you've been reviewing AI output for eight hours straight and your brain is done. The output stays high while the quality silently erodes.

The ICSE 2026 paper surveyed 442 developers and found that GenAI adoption heightens burnout specifically by increasing job demands. The mechanism isn't mysterious. More capability means more is expected, which means more decisions per day, which means more cognitive drain.

Ranganathan and Ye put it clearly: what looks like higher productivity in the short run masks silent workload creep and growing cognitive strain. The productivity surge at the beginning gives way to lower quality work and turnover.

What We Lost

Here's what I miss about the old way of working. Typing code was slow. Looking things up was slow. That slowness was a natural governor on work pace. You couldn't burn out from typing because your body would stop you. You'd hit a compile cycle and stare at the ceiling for 30 seconds. You'd flip to Stack Overflow and get distracted by a tangentially related answer.

Those weren't inefficiencies. They were recovery time. Micro-breaks that your brain used to consolidate decisions, process context, and reset. AI removed the slow parts and filled them with more decision-making. The breaks are gone. The pace is continuous. And continuous high-cognitive-load work is not sustainable, no matter how good the tooling is.

The UC Berkeley researchers recommend companies develop an "AI practice," intentional norms around AI use that include structured pauses, task sequencing, and deliberate human interaction. I'd simplify it: if you're using AI coding tools, you need to actively protect time where you're not making decisions about AI output. The tool won't pace you. You have to pace yourself.

The Uncomfortable Question

We keep measuring AI's impact on velocity. Features shipped. PRs merged. Lines of code. But nobody's measuring the cost to the people producing that output. The burnout data is starting to come in, and it tells a different story than the productivity dashboards.

AI doesn't reduce work. It compresses the slow parts and backfills them with more work. The total cognitive load goes up, not down. And the people who adopt it the hardest are the ones most at risk.

Maybe the right metric isn't how fast you ship. Maybe it's how long you can sustain the pace.