The Software Lifecycle Collapsed. Your Process Didn't.
Ricardo Argüello — March 19, 2026
CEO & Founder
General summary
Karpathy codes in English now. 100% of Nvidia uses AI coding tools. Boris Cherny, creator of Claude Code, hasn't written a line in over two months. The SDLC — requirements, design, build, test, review, deploy — collapsed. What remains is a different cycle: intent, build, observe, repeat. Context engineering replaces process, and observability becomes the last line of defense.
- The traditional SDLC didn't get faster — it stopped applying: AI agents don't follow phases because there are no phases, just intent, context, and iteration
- Requirements no longer freeze into a document: when you can generate ten versions of a feature in minutes, the spec becomes a byproduct of iteration
- An agent generates 500 PRs a day; your team reviews maybe 10 — the code review queue is a ritual forced onto a machine workflow
- Context engineering — deciding what information the agent receives — replaces sprint management as the core operational skill
- Observability becomes the last line of defense: when every prior phase collapses, what you observe in production tells you if it works
Picture a factory where every product went through five stations in order: design, cutting, assembly, testing, packaging. One day a machine arrives that does all five at once. The stations are still there, but nobody uses them — work doesn't flow that way anymore. That's what's happening with software development: AI agents don't follow your sprints because they don't need separate phases.
Three signals from the same week.
Andrej Karpathy — former AI director at Tesla, OpenAI co-founder — says he codes in English now. His favorite programming language isn’t Python anymore. It’s the language he uses to describe what he needs.
Jensen Huang said at GTC 2026 that 100% of Nvidia uses AI coding tools — Claude Code, Codex, and Cursor. Not 30%. Not a pilot program. Everyone.
And Boris Cherny, creator of Claude Code — the fastest-growing agent tool in the industry — hasn’t written a line of code in over two months.
These aren’t isolated anecdotes. They’re the same thesis from three different angles.
In 25 years of building enterprise software at IQ Source, we’ve lived through waterfall, agile, and DevOps. Each transition felt massive. What’s happening now isn’t another transition. It’s something else entirely.
Boris Tane’s Framework: There Are No Steps
Boris Tane put it in terms worth repeating — and Shane Spencer amplified it with a reading that connects the dots.
His argument: AI agents don’t follow the SDLC (Software Development Life Cycle — the sequence of phases a piece of software goes through from idea to production). They don’t do it faster. They don’t skip steps. There are no steps. What exists is intent, context, and iteration.
The SDLC assumes development is a linear process with deliverables between phases. Requirements first. Then design. Then build. Then test. Then review. Then deploy. Each phase produces something the next one consumes.
When an agent receives a task, it doesn’t follow that sequence. It generates code, tests it, fixes it, and delivers it in a single cycle. The “phases” happen simultaneously within each iteration. There’s no separate “now we’re in design” moment distinct from “now we’re building.”
This isn’t a faster SDLC. It’s a different operating model.
What Happened to Each Phase
Not every SDLC phase collapsed the same way. Each one broke for a different reason.
Requirements: The Spec as a Byproduct
For decades, the first step was freezing requirements into a document. You wrote the PRD, got it approved, and that document governed the next several weeks.
That logic made sense when building a version took weeks. The cost of changing direction was high. Better to specify well before starting.
When you can generate ten versions of a feature in minutes, the equation changes. The spec stops being a prerequisite and becomes a byproduct. You try the idea, see the result, adjust your intent. The requirements document doesn’t disappear — but it’s no longer what starts the process. It’s what you document after iterating.
Design: Real-Time Collaboration With the Model
The model has seen more architectures than any individual engineer. Microservice patterns, modular monoliths, event sourcing, CQRS — the agent knows them not because it implemented them, but because it processed millions of repositories where they were implemented.
Design is no longer one person at a whiteboard. It’s a conversation with an agent that returns working code, not diagrams. The output of design isn’t an architecture document — it’s a prototype that runs.
That doesn’t mean the architect’s judgment is obsolete. It means their work shifted: from drawing the solution to evaluating and correcting what the agent proposes.
Testing: Built Into Generation
| Previous model | Agent model |
|---|---|
| Write code → send to QA → wait for results → fix | Agent generates code and tests in the same cycle |
| TDD as a methodology the team adopts (or doesn’t) | TDD as the tool’s default behavior |
| Test suite as a separate deliverable | Tests as part of each iteration’s output |
| Bugs found during testing phase | Bugs found during generation |
Agents write tests at the same time they write code. Not as a later phase. Not as a methodology the team decided to adopt. It’s how the tools work by default.
TDD stopped being an organizational decision. It’s simply what happens when you tell an agent “build this.”
Code Review: A Ritual That Doesn’t Scale
This is where it hurts most.
An agent like Uber’s Minion generates hundreds of PRs per day. Uber reports that 11% of their PRs are opened by an agent with no human author. Your team can review maybe 10 to 15 PRs per day with real attention.
The code review queue isn’t a bottleneck you can optimize. It’s a ritual designed for a world where each PR was the product of hours of human work. When a PR takes 90 seconds to generate, line-by-line review becomes impractical.
This doesn’t mean eliminating review. It means the control mechanism changes: from reviewing every line of code to monitoring system behavior in production.
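The arithmetic behind that claim is worth making explicit. A minimal sketch, using the article's own figures (500 generated PRs per day against roughly 15 human reviews per day):

```python
# Back-of-the-envelope: why line-by-line review cannot absorb agent output.
# Figures come from the article's example; the window is one work week.
generated_per_day = 500   # PRs opened by the agent
reviewed_per_day = 15     # PRs a team can review with real attention

backlog = 0
for day in range(1, 6):
    backlog += generated_per_day - reviewed_per_day
    print(f"day {day}: unreviewed backlog = {backlog}")
# After five days the queue holds 2,425 unreviewed PRs. The queue is not
# a bottleneck to tune; it diverges.
```

No amount of reviewer efficiency closes a 485-PR-per-day gap, which is why the control has to move elsewhere.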
The New Cycle: Intent, Build, Observe, Repeat
What replaces the SDLC isn’t chaos. It’s a shorter cycle with different controls.
Intent. Instead of a requirements document, you define what you need in natural language with relevant context. Context engineering — deciding what information, constraints, and examples the agent receives — is the skill that replaces sprint management. Better context leads directly to better output.
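What "deciding what the agent receives" looks like in practice can be sketched in a few lines. This is an illustrative structure, not any specific tool's API; the `AgentContext` class and its field names are assumptions for the sake of the example:

```python
# Minimal sketch of context engineering: assembling what the agent sees
# before it generates anything. Class and field names are illustrative.
from dataclasses import dataclass, field

@dataclass
class AgentContext:
    intent: str                                      # what you need, in plain language
    files: list = field(default_factory=list)        # relevant source files
    constraints: list = field(default_factory=list)  # hard rules the output must obey
    examples: list = field(default_factory=list)     # samples of expected output

    def render(self) -> str:
        """Flatten the context into a single prompt string."""
        parts = [f"TASK: {self.intent}"]
        if self.constraints:
            parts.append("CONSTRAINTS:\n" + "\n".join(f"- {c}" for c in self.constraints))
        if self.examples:
            parts.append("EXPECTED OUTPUT EXAMPLES:\n" + "\n".join(self.examples))
        for path in self.files:
            parts.append(f"FILE {path}:\n<contents of {path}>")
        return "\n\n".join(parts)

ctx = AgentContext(
    intent="Add rate limiting to the public API",
    files=["api/middleware.py"],
    constraints=["Use the existing Redis client", "No new third-party dependencies"],
)
print(ctx.render())
```

The point of the sketch: intent, constraints, and examples are deliberate inputs you curate, not a prompt you improvise.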
Build. The agent generates code, tests, and documentation in a single cycle. No handoffs between teams. No sprint waits. Iteration is continuous.
Observe. When manual code review doesn’t scale and requirements don’t freeze, what tells you the software works? What you see in production. Observability — logs, metrics, alerts, traces — becomes the last line of defense. It’s no longer “nice to have.” It’s the primary control.
Repeat. Each observation feeds the next intent. The cycle repeats, but each turn takes minutes, not weeks.
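The "observe" step above implies one concrete practice: every production event should carry the provenance of the code that produced it, so dashboards can split behavior by human-written versus agent-generated code. A minimal sketch, with illustrative field names rather than any specific observability vendor's schema:

```python
# Sketch of provenance-tagged structured logging. The "code_source" field
# lets downstream dashboards split incidents human vs. agent. Field names
# are illustrative assumptions, not a standard schema.
import json
import time

def emit(event: str, source: str, **fields) -> dict:
    """Build and print a structured log record; in production this would
    go to your logging pipeline instead of stdout."""
    record = {"ts": time.time(), "event": event, "code_source": source, **fields}
    print(json.dumps(record))
    return record

emit("deploy", source="agent", service="billing")
r = emit("error_budget_alert", source="agent", service="billing", error_rate=0.021)
```

Once events are tagged this way, "is agent code causing more incidents than human code?" becomes a query instead of a debate.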
The Legitimate Enterprise Concern
Every week we hear some variation of the same question: “If there are no sprints, how do I estimate? How do I report to the board? How do I comply with regulations?”
The concern is legitimate. But it’s not new.
When companies migrated from waterfall to agile, the panic was similar. “How do I tell the client when it’ll be ready if I don’t have a Gantt chart?” Turns out the answer was measuring delivery velocity instead of predicting dates. The world didn’t end.
Now the transition is the same, one level deeper:
- Sprint velocity → replaced by context quality as the leading metric. If the context you give the agent is precise, the output is correct on the first iteration. If it’s ambiguous, you need three.
- Pre-approval governance (sign-off before anything is built) → replaced by post-deployment monitoring. Automated audits on generated code, continuous regression tests, production alerts.
- Story points and estimates → replaced by iteration count to correct result. If your team needs one iteration to resolve a ticket, the process works. If it needs seven, the problem is the context, not the agent.
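The iteration-count metric from the last bullet is trivial to compute once tickets record how many passes the agent needed. A minimal sketch, where the ticket shape and the flagging threshold are illustrative assumptions:

```python
# Sketch of "iterations to correct result" as a metric. Ticket records and
# the threshold of 5 are illustrative; pull real data from your tracker.
from statistics import mean

tickets = [
    {"id": "T-101", "iterations": 1},
    {"id": "T-102", "iterations": 2},
    {"id": "T-103", "iterations": 7},  # likely a context problem, per the heuristic above
]

avg = mean(t["iterations"] for t in tickets)
flagged = [t["id"] for t in tickets if t["iterations"] >= 5]
print(f"avg iterations: {avg:.1f}, context review needed: {flagged}")
```

A rising average points at degrading context quality, not at the agent, which is exactly the diagnostic the bullet describes.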
Regulatory compliance doesn’t disappear. It relocates. Instead of verifying the team followed the right process before building, you verify the result meets the criteria after generating it. Compliance evidence comes from generation logs, automated test results, and production monitoring — not from planning meeting minutes.
Four Things You Can Do This Week
I’m not telling you to eliminate your sprints tomorrow. I’m telling you to evaluate whether your process reflects how your team actually works.
1. Measure your agent adoption rate. What percentage of the code your team ships each week was generated by an agent? If you don’t have that number, you don’t know whether your process still applies. An internal survey of five questions gives you a baseline in 48 hours.
2. Stop optimizing the code review queue. If you’re looking for ways to make your team review PRs faster, you’re solving the wrong problem. Redesign your review process to distinguish between human code and generated code. Apply different criteria to each. Invest in tools that filter useful signal from noise.
3. Invest in context engineering. Train your tech leads on how to structure context for agents: which files to include, which constraints to define, which examples of expected output to provide. The difference between an agent that produces garbage and one that produces production-ready code is usually context quality, not model quality.
4. Shift governance to post-deployment. If your control mechanism depends on approvals before building, start complementing it with monitoring after deploying. Regression alerts. Automated audits of generated code. Incident dashboards by source (human vs. agent). When your team generates more code than it can manually review, observability is what protects you.
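For item 1, one cheap baseline alongside the survey: many agent tools add co-author trailers to the commits they produce, so commit messages can be scanned for them. The trailer strings below are assumptions; check what your team's tools actually write:

```python
# Rough baseline for agent adoption: the share of commits carrying an
# agent co-author trailer. Marker strings are assumptions -- verify what
# your tools actually put in commit messages.
AGENT_MARKERS = ("Co-Authored-By: Claude", "Co-Authored-By: Codex")

def agent_share(commit_messages: list) -> float:
    """Fraction of commit messages containing an agent trailer."""
    if not commit_messages:
        return 0.0
    hits = sum(any(m in msg for m in AGENT_MARKERS) for msg in commit_messages)
    return hits / len(commit_messages)

# In practice, feed this from `git log --format=%B` over the last few weeks.
log = [
    "Fix pagination\n\nCo-Authored-By: Claude <noreply@anthropic.com>",
    "Bump deps",
]
print(f"{agent_share(log):.0%} of sampled commits are agent-assisted")
```

It undercounts (not every tool leaves a trailer), but it turns "we don't have that number" into a number in an afternoon.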
If you don’t know what percentage of your code is agent-generated or how your governance process adapts to that reality, that’s exactly what we solve. At IQ Source we build an AI engineering maturity assessment in two weeks: we measure real adoption, evaluate your current controls, and deliver a concrete transition plan. Let’s talk.
Frequently Asked Questions
What happened to the traditional SDLC?
The traditional SDLC splits development into sequential phases: requirements, design, build, test, review, and deploy. In 2026, AI coding agents execute all those phases in a single continuous cycle. The process didn't speed up — it stopped applying, because agents don't need separate phases to produce functional code.
What replaces the sequential SDLC?
What replaces the sequential SDLC is a four-step cycle: intent (what you need), build (the agent generates), observe (production monitoring), and repeat. Context engineering — what information the agent receives — becomes the most important operational skill, and observability replaces code review as the primary control mechanism.
How does governance work without sprints?
Governance shifts from pre-approval to post-observation. Instead of story points and sprints, companies measure context quality delivered to the agent, production incident rates, and observability coverage. Regulatory compliance is verified through automated audits on generated code before deployment, not through planning meetings.
What is context engineering?
Context engineering is the discipline of deciding what information, constraints, and examples an AI agent receives before generating code. It's the key new skill because the quality of generated code depends directly on context quality. An agent with good context produces correct code; the same agent without context produces code that compiles but doesn't solve the problem.
Related Articles
LiteLLM Attack: Your AI Trust Chain Just Broke
LiteLLM, the AI API key proxy with 97 million monthly downloads, was poisoned via PyPI. Your security scanner was the entry point.
Google Stitch + AI Studio: Design-to-Code Without Engineers
Google shipped a full design-to-production pipeline with Stitch and AI Studio. Where it works for B2B prototypes and where you still need real engineering.