
Uber: 70% of Code Is AI. Your Team Hasn't Changed

Uber: 92% of engineers use AI agents monthly, ~70% of code is AI-generated, 11% of PRs ship with no human author. What this means for your B2B team.


Ricardo Argüello


CEO & Founder

AI & Automation · 6 min read

Last week, The Pragmatic Engineer published a detailed look at how Uber uses AI in its development process. This isn’t an opinion piece. It’s internal data from a 5,000-engineer organization, shared by Praveen Neppalli Naga, Uber’s CTO.

The numbers: 92% of engineers use agents monthly. Between 65% and 72% of code written in IDEs is AI-generated. 11% of pull requests are opened by agents — without a human writing a single line.

This isn’t a pilot program or some isolated R&D team — it’s how production engineering works at Uber today.

One in Ten PRs Ships Without a Single Human-Written Line

Let’s get into the data.

Uber has over 5,000 engineers. 92% use AI agent tools every month. 84% already work with agentic workflows — meaning they’re not just getting autocomplete suggestions, they’re delegating complete tasks to an agent.

The results speak for themselves: AI now generates up to 72% of the code written in their IDEs. And beyond the IDE, Uber built Minion — an internal agent platform with full monorepo access — that opens 11% of all pull requests entirely on its own.

To put it in proportion: out of every ten PRs entering Uber’s review pipeline, at least one wasn’t written by anyone. It was generated, tested, and proposed by an agent.

The cost of this infrastructure isn’t trivial. Uber’s AI-related spending has gone up 6x since 2024. But clearly, the decision is made: investment goes toward agents, not toward more engineers doing work an agent can handle.

Autocomplete Is Dead. Welcome to Agent-First Engineering

Inside Uber, the most revealing data point isn’t how much code AI generates. It’s the speed at which agents are replacing autocomplete.

In December 2025, 32% of Uber engineers used Claude Code. Two months later, in February 2026, that figure jumped to 63%. It nearly doubled. Meanwhile, classic autocomplete tools — the ones that suggest one line at a time inside the editor — flatlined.

This isn’t an Uber-only pattern. The Pragmatic Engineer’s survey of 906 engineers confirms the same trend globally: Claude Code went from not existing to being the most-used AI tool in just 8 months. 55% of respondents already use agents regularly. Among Staff+ and above engineers, that number hits 63%.

The industry has already moved from “copilot in the editor” to “agent with full codebase context.” If your company’s AI strategy still stops at autocomplete, you’ve already fallen behind.

I’m not saying autocomplete is useless. I’m saying it’s no longer enough. It’s like buying walkie-talkies right before cell phones came out. They work. But the rest of the market already moved on.

If your team is still debating whether to adopt Copilot or Cursor for inline suggestions, while Uber’s engineers are delegating full tasks to an agent, the capability gap is widening faster than it looks.

The Quiet Adoption (And Why Your CTO Might Miss It)

There’s a detail in Uber’s data that’s worth more than any chart.

The strongest adoption isn’t coming from top-down programs. It’s not from CTO mandates or training workshops. It’s from engineers quietly experimenting and quietly shipping. They try an agent, see it works, fold it into their daily workflow, and move on. Nobody asked them to do it. Nobody asked them if they were doing it.

This is shadow AI adoption. And it shares the same traits as the shadow IT of a decade ago: no governance, no standards, no visibility.

The difference is that shadow IT was installing Dropbox without permission. Shadow AI means 70% of the code in your repository was written by an agent and your code review process was designed to evaluate human-written code.

If your reviewers can’t tell the difference between a PR written by a person and one generated by an agent, they can’t apply differentiated criteria. If your metrics system attributes productivity by commits, you’re measuring the agent’s speed and confusing it with the engineer’s. If your quality control still assumes a human thoughtfully made every architectural choice, you’re operating on a fantasy. For a growing chunk of your codebase, that’s just not true anymore.

Code review processes need updating for a world where a significant fraction of code wasn’t written by a human. Uber already knows this. They built uReview, a system that filters review comments by useful signal, and it analyzes over 90% of their ~65,000 weekly diffs. Your company probably has nothing equivalent.
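One practical first step is routing: if a PR contains agent-authored commits, flag it so reviewers know which criteria to apply. Here's a minimal sketch of that idea. The trailer strings below ("Co-authored-by: Claude", "Generated-by:") are assumptions for illustration — match whatever trailers your agents actually write into commit messages.

```python
# Sketch: flag PRs whose commits carry an AI-agent trailer, so reviewers
# can apply differentiated criteria to agent-generated diffs.
# The marker strings are assumptions -- adjust to your tooling's trailers.

AGENT_MARKERS = (
    "co-authored-by: claude",
    "generated-by:",
)

def is_agent_authored(commit_message: str) -> bool:
    """True if any line of the commit message starts with an agent trailer."""
    return any(
        line.strip().lower().startswith(marker)
        for line in commit_message.splitlines()
        for marker in AGENT_MARKERS
    )

def review_tier(commit_messages: list[str]) -> str:
    """Route a PR: 'agent-review' if any commit is agent-authored, else 'standard'."""
    if any(is_agent_authored(msg) for msg in commit_messages):
        return "agent-review"
    return "standard"

# Example: a PR with one agent-co-authored commit gets routed differently.
pr = ["Fix race in worker pool\n\nCo-authored-by: Claude <noreply@anthropic.com>"]
print(review_tier(pr))  # agent-review
```

This is crude compared to uReview, but it gives reviewers the one signal they currently lack: knowing which diffs a human actually wrote.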

What a 200-Person B2B Company Should Do Right Now

I’m not going to tell you that you need to build what Uber built. Uber has a dedicated developer platform team, budget to multiply AI costs by 6x, and the scale to justify it. Your company probably doesn’t.

But there are three things you can do this week:

Measure what you can’t see. Run an internal survey: what percentage of your team uses agents? Which tools? What proportion of the code they ship is generated versus manually written? Uber has dashboards for this. You can start with a 5-question form. The point is to have a baseline number. If you don’t measure this, you’re making every AI engineering decision with a blindfold on.

Shift investment from autocomplete to agents. If your AI tooling budget is going to Copilot licenses for inline suggestions, evaluate whether that investment should move to tools with full agent capability. The data from Uber and The Pragmatic Engineer survey shows that productivity gains compound with agent workflows, not autocomplete suggestions. Autocomplete isn’t bad — the marginal return just isn’t growing anymore.

Assign one person responsible for AI engineering governance. Not a committee. Not a working group. One person who understands both the engineering process and the risks of agent-generated code. Someone who can answer: what percentage of our code is AI-generated? Is our review process adapted? Do we have traceability? In our experience at IQ Source, this agent operator role is the one most missing in mid-market companies that already have real adoption but zero visibility.
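For the first step, even a 5-question form deserves a defined baseline metric. A minimal sketch of the aggregation, where the response fields (`uses_agents`, `pct_ai_generated`) are assumptions for illustration:

```python
# Sketch: turn raw survey responses into two baseline numbers:
# what share of the team uses agents, and the median self-reported
# share of AI-generated code. Field names are illustrative assumptions.
from statistics import median

def adoption_baseline(responses: list[dict]) -> dict:
    """Each response: {'uses_agents': bool, 'pct_ai_generated': int 0-100}."""
    if not responses:
        return {"agent_adoption_pct": 0.0, "median_ai_code_pct": 0.0}
    agent_users = [r for r in responses if r["uses_agents"]]
    return {
        "agent_adoption_pct": round(100 * len(agent_users) / len(responses), 1),
        "median_ai_code_pct": float(median(r["pct_ai_generated"] for r in responses)),
    }

# Example with three responses:
print(adoption_baseline([
    {"uses_agents": True, "pct_ai_generated": 70},
    {"uses_agents": False, "pct_ai_generated": 10},
    {"uses_agents": True, "pct_ai_generated": 50},
]))
```

Self-reported numbers are noisy, but a noisy baseline still beats the blindfold: it tells you whether you're closer to Uber's 92% or to zero.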

The Cost of Waiting Now Has Data

A year ago, the reasonable objection was: “Maybe it’s too early.” Six months ago: “The data is from Big Tech, it doesn’t apply to my company.”

Now we have: 92% monthly adoption at Uber. 84% working with agentic workflows. 55% of 906 engineers surveyed by The Pragmatic Engineer already using agents regularly — rising to 63% among the most senior.

These aren’t early adopters. This is already standard practice in modern engineering.

The question is no longer whether agents work. The question is: how do you govern the agents your engineers are probably already using?


If you don’t know what percentage of the code your team ships each week was written by an agent, that’s the first gap we can close. At IQ Source, we run an AI engineering maturity audit in two weeks: we measure real adoption, identify governance risks, and deliver a concrete plan. Let’s talk.


software engineering, AI coding agents, AI adoption, engineering productivity, talent strategy, development automation, technology leadership
