Uber: 70% of Code Is AI. Your Team Hasn't Changed
Ricardo Argüello — March 17, 2026
CEO & Founder
General summary
Uber: 5,000 engineers, 92% use agents monthly, 65-72% of IDE code is AI-generated, 11% of PRs opened by agents with zero human authoring. The data kills the doubt: coding agents are already writing production code at scale. The question for your company isn't whether to adopt agents — it's how to govern the ones your team is probably already using.
- 65-72% of code written from IDEs at Uber is AI-generated; their internal agent Minion opens 11% of all PRs — with no human author
- Claude Code usage at Uber doubled from 32% to 63% in two months, while autocomplete tools flatlined
- 84% of Uber engineers already work with agent workflows; The Pragmatic Engineer's survey confirms 55% of engineers industry-wide use agents regularly
- The strongest adoption comes from engineers experimenting quietly — without visibility or governance, your company faces unmonitored code risk
- The concrete step: measure what percentage of code is agent-generated, shift investment from autocomplete to agents, and assign governance responsibility to one person
Imagine a taxi fleet where 70% of rides are already handled by autonomous vehicles, but the company still trains drivers as if everyone drove manually. Passengers reach their destination, but no one in management knows how many trips were made by a human versus a machine. That's what's happening in engineering teams: agents are already writing production code, but most companies don't measure how much or govern how.
AI-generated summary
Last week, The Pragmatic Engineer published a detailed look at how Uber uses AI in its development process. This isn’t an opinion piece. It’s internal data from a 5,000-engineer organization, shared by Praveen Neppalli Naga, Uber’s CTO.
The numbers: 92% of engineers use agents monthly. Between 65% and 72% of code written in IDEs is AI-generated. 11% of pull requests are opened by agents — without a human writing a single line.
This isn’t a pilot program or some isolated R&D team — it’s how production engineering works at Uber today.
One in Ten PRs Ships Without a Single Human-Written Line
Let’s get into the data.
Uber has over 5,000 engineers. 92% use AI agent tools every month. 84% already work with agentic workflows — meaning they’re not just getting autocomplete suggestions, they’re delegating complete tasks to an agent.
The results speak for themselves: AI now generates up to 72% of the code written in their IDEs. And beyond the IDE, Uber built Minion — an internal agent platform with full monorepo access — that opens 11% of all pull requests entirely on its own.
To put it in proportion: out of every ten PRs entering Uber’s review pipeline, at least one wasn’t written by any human. It was generated, tested, and proposed by an agent.
The cost of this infrastructure isn’t trivial. Uber’s AI-related spending has gone up 6x since 2024. But clearly, the decision is made: investment goes toward agents, not toward more engineers doing work an agent can handle.
Autocomplete Is Dead. Welcome to Agent-First Engineering
Inside Uber, the most revealing data point isn’t how much code AI generates. It’s the speed at which agents are replacing autocomplete.
In December 2025, 32% of Uber engineers used Claude Code. Two months later, in February 2026, that figure jumped to 63%. It doubled. Meanwhile, classic autocomplete tools — the ones that suggest one line at a time inside the editor — flatlined.
This isn’t an Uber-only pattern. The Pragmatic Engineer’s survey of 906 engineers confirms the same trend globally: Claude Code went from not existing to being the most-used AI tool in just 8 months. 55% of respondents already use agents regularly. Among Staff+ and above engineers, that number hits 63%.
The industry has already moved from “copilot in the editor” to “agent with full codebase context.” If your company’s AI strategy still stops at autocomplete, you’ve already fallen behind.
I’m not saying autocomplete is useless. I’m saying it’s no longer enough. It’s like buying walkie-talkies right before cell phones came out. They work. But the rest of the market already moved on.
If your team is still debating whether to adopt Copilot or Cursor for inline suggestions, while Uber’s engineers are delegating full tasks to an agent, the capability gap is widening faster than it looks.
The Quiet Adoption (And Why Your CTO Might Miss It)
There’s a detail in Uber’s data that’s worth more than any chart.
The strongest adoption isn’t coming from top-down programs. It’s not from CTO mandates or training workshops. It’s from engineers quietly experimenting and quietly shipping. They try an agent, see it works, fold it into their daily workflow, and move on. Nobody asked them to do it. Nobody asked them if they were doing it.
This is shadow AI adoption. And it shares the same traits as the shadow IT of a decade ago: no governance, no standards, no visibility.
The difference is that shadow IT was installing Dropbox without permission. Shadow AI means 70% of the code in your repository was written by an agent and your code review process was designed to evaluate human-written code.
If your reviewers can’t tell the difference between a PR written by a person and one generated by an agent, they can’t apply differentiated criteria. If your metrics system attributes productivity by commits, you’re measuring the agent’s speed and confusing it with the engineer’s. If your quality control still assumes a human thoughtfully made every architectural choice, you’re operating on a fantasy. For a growing chunk of your codebase, that’s just not true anymore.
Code review processes need updating for a world where a significant fraction of code wasn’t written by a human. Uber already knows this. They built uReview, a system that filters review comments by useful signal, and it analyzes over 90% of their ~65,000 weekly diffs. Your company probably has nothing equivalent.
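Short of building something like uReview, a practical first step is simply labeling which PRs came from an agent, so reviewers can apply different criteria. Here is a minimal Python sketch; the bot account names and the `Generated-by:` commit trailer are illustrative assumptions, not anything from Uber's setup or a GitHub convention:

```python
# Minimal sketch: classify a PR as agent-generated or human-written.
# The bot account names and the "Generated-by:" trailer are illustrative
# assumptions for this example, not an established standard.

AGENT_AUTHORS = {"minion-bot", "claude-code[bot]"}  # hypothetical bot accounts

def classify_pr(author: str, commit_messages: list[str]) -> str:
    """Return 'agent' if the PR author is a known bot account or any
    commit message carries an agent trailer; otherwise 'human'."""
    if author in AGENT_AUTHORS:
        return "agent"
    for msg in commit_messages:
        for line in msg.splitlines():
            if line.strip().lower().startswith("generated-by:"):
                return "agent"
    return "human"

# A PR opened directly by a bot account:
print(classify_pr("minion-bot", ["Fix flaky test"]))  # agent
# A human-opened PR whose commits carry an agent trailer:
print(classify_pr("alice", ["Refactor auth\n\nGenerated-by: claude-code"]))  # agent
# A plain human PR:
print(classify_pr("alice", ["Refactor auth"]))  # human
```

Even a crude classifier like this gives reviewers a signal they currently don't have: which diffs deserve human-authored-code assumptions and which don't.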
What a 200-Person B2B Company Should Do Right Now
I’m not going to tell you that you need to build what Uber built. Uber has a dedicated developer platform team, budget to multiply AI costs by 6x, and the scale to justify it. Your company probably doesn’t.
But there are three things you can do this week:
Measure what you can’t see. Run an internal survey: what percentage of your team uses agents? Which tools? What proportion of the code they ship is generated versus manually written? Uber has dashboards for this. You can start with a 5-question form. The point is to have a baseline number. If you don’t measure this, you’re making every AI engineering decision with a blindfold on.
Shift investment from autocomplete to agents. If your AI tooling budget is going to Copilot licenses for inline suggestions, evaluate whether that investment should move to tools with full agent capability. The data from Uber and the PE survey shows that productivity gains compound with agent workflows, not autocomplete suggestions. Autocomplete isn’t bad — the marginal return just isn’t growing anymore.
Assign one person responsible for AI engineering governance. Not a committee. Not a working group. One person who understands both the engineering process and the risks of agent-generated code. Someone who can answer: what percentage of our code is AI-generated? Is our review process adapted? Do we have traceability? In our experience at IQ Source, this agent operator role is the one most missing in mid-market companies that already have real adoption but zero visibility.
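The first of these steps, building a baseline, can be as simple as aggregating survey answers. A Python sketch over made-up responses (the field names and numbers are assumptions for illustration, not Uber's data or a recommended schema):

```python
# Illustrative baseline from a hypothetical internal survey.
# Field names and response values are made up for this sketch.
responses = [
    {"uses_agents": True,  "pct_code_ai": 60},
    {"uses_agents": True,  "pct_code_ai": 40},
    {"uses_agents": False, "pct_code_ai": 0},
    {"uses_agents": True,  "pct_code_ai": 75},
]

def baseline(rows):
    """Return (share of engineers using agents, mean self-reported
    AI-generated code percentage), both rounded to whole percent."""
    n = len(rows)
    using = sum(r["uses_agents"] for r in rows) / n
    ai_share = sum(r["pct_code_ai"] for r in rows) / n
    return round(using * 100), round(ai_share)

using_pct, ai_pct = baseline(responses)
print(f"{using_pct}% use agents; ~{ai_pct}% of shipped code is AI-generated")
```

Self-reported numbers are noisy, but a rough baseline beats no baseline: it's the number that makes the other two steps (budget shifts and a governance owner) defensible.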
The Cost of Waiting Now Has Data
A year ago, the reasonable objection was: “Maybe it’s too early.” Six months ago: “The data is from Big Tech, it doesn’t apply to my company.”
Now we have: 92% monthly adoption at Uber. 84% working with agentic workflows. 55% of 906 engineers surveyed by The Pragmatic Engineer already using agents regularly — rising to 63% among the most senior.
These aren’t early adopters. This is already standard practice in modern engineering.
The question is no longer whether agents work. The question is: how do you govern the agents your engineers are probably already using?
If you don’t know what percentage of the code your team ships each week was written by an agent, that’s the first gap we can close. At IQ Source, we run an AI engineering maturity audit in two weeks: we measure real adoption, identify governance risks, and deliver a concrete plan. Let’s talk.
Frequently Asked Questions
How much of Uber's code is AI-generated?
According to Uber's internal data published by The Pragmatic Engineer in March 2026, between 65% and 72% of code written from IDEs like Cursor and IntelliJ is AI-generated. Additionally, 11% of all pull requests are opened directly by an agent with no human intervention. 92% of Uber's 5,000 engineers use agents every month.
What's the difference between autocomplete and an autonomous coding agent?
Autocomplete suggests individual lines inside the editor while you type. An autonomous coding agent receives a complete task, accesses the repository, writes code, runs tests, and can open a pull request without human intervention. At Uber, Claude Code usage — an agent — doubled from 32% to 63% in two months while autocomplete tools stalled.
What metrics should a team track to measure agent adoption?
Three metrics Uber already tracks: percentage of commits generated by agents versus written manually, proportion of engineers using agent workflows versus autocomplete only, and number of pull requests opened by autonomous agents. Uber measures this through its internal platform. Any 50-person engineering team can collect this data in a week with an internal survey.
What are the risks of unmonitored agent-generated code?
Agent-generated code enters production without review processes adapted to evaluate it. Risks include lack of traceability over what a human wrote versus an agent, absence of differentiated quality criteria for generated code, and architectural decisions made by agents without business context. If you don't measure it, you can't govern it.