The One-Shot Mirage: Three Voices, One Warning
Ricardo Argüello — April 26, 2026
CEO & Founder
General summary
Between April 23 and 24, three voices from three corners of the AI ecosystem published the same warning in under 24 hours. Ronan Berder said no one is actually 'running 20 agents overnight' and shipping software that real users keep using. Jon Yongfook called the one-shot a fugazi and pointed at the raw material a prompt cannot compress: ten years of edge cases his bootstrapped SaaS has fought in the wild. Chamath Palihapitiya named the financial pattern, the trough of disillusionment, and said the only documented exit for negative-margin AI businesses is a fast sale. The convergence is the news, not the individual takes.
- Ronan Berder (765K views, April 23) said most accounts claiming to run 20 agents overnight are not actually shipping software for real users, and challenged anyone to livestream it
- Jon Yongfook ($79K MRR bootstrapped, 387K views) called the AI one-shot a fugazi and named what a prompt cannot capture: ten years of edge cases accumulated through real users
- Chamath Palihapitiya (184K views) named the financial pattern: one-shot grows fast but does not turn negative gross margins into a good business, and the only narrow exit is a timed sale like Windsurf or Cursor
- Three different incentive structures (a venture capitalist, a profitable bootstrapper, and an indie maker) converged on the same structural claim: edge cases, retention, support, trust, and distribution are accumulated capital, not prompt-generated capital
- Thirty-six years and five cycles of the same 'anyone can build it now' script: Visual Basic, Rails generators, WordPress, Bubble, vibe coding. The demo is always real. The business almost never is
Picture a TV chef who plates a perfect dish in 90 seconds on camera. The plate is real, the footage is not edited, and the showroom version looks flawless. Now ask that same chef to open a restaurant and serve that plate 200 times a night, across a 12-hour shift, with a supplier running late, a refrigerator breaking down, and an allergic guest who arrived without warning. The show plate and the restaurant plate are technically the same recipe. Operationally they are different problems. The AI one-shot is the show plate. A real software business is the restaurant.
AI-generated summary
Between Thursday, April 23, and Friday, April 24, three voices from three different corners of the AI ecosystem said almost the same thing in under 24 hours. This is not editorial coincidence. This is what happens when a pattern is already on the wall and somebody finally says it out loud.
Ronan Berder posted on April 23 at 7:48 a.m. that he is convinced most of the AI accounts in his timeline are full of it, and that nobody is actually “running 20 agents overnight” and shipping software that users keep around. The post crossed 765K views in under 48 hours. He closed with a self-deprecating PS: “it may also be that I have an IQ of 82 and can’t figure it out.” That close is what made the post stick. If you are going to call something a fugazi, it lands harder when you leave the door open to “maybe the problem is me.”
Jon Yongfook replied that same afternoon. Yongfook has been running Bannerbear, Clipcat, and Roborabbit as a bootstrapped SaaS for ten years and openly reports $79K in MRR. His line: “the idea of one shotting an app using AI is a fugazi. If you had to describe my app and all the edge cases I have solved over the years, it would be a prompt the size of a small book, and my app isn’t even that complicated.” 387K views.
Chamath Palihapitiya closed the pattern the next morning. “The hype cycle will soon fade, the trough of disillusionment will set in, and a lot of these magical promises will be undone as will the companies that have made them. AI one-shotting is a good way to grow fast but it doesn’t magically make the negative gross margins that come with it a good business idea. The narrow path to victory for these folks is if you can grow super fast and do a well-timed sale (Windsurf, Cursor).” 184K views.
Three different incentive structures. A venture capitalist with a public portfolio, a profitable bootstrapper, an indie maker honest enough to admit he might be the dumb one in the room. Same warning. That is the news.
What the three voices were actually saying together
Each of the three saw the same problem from a different angle.
Berder framed it as a problem of evidence. He uses AI all day long, writes most of his code through agents, and ships devpu.sh and basecoatui.com on top of agent workflows. He is not an ideological skeptic. His issue is that the AI claims with the highest reach in his feed are claims he cannot reproduce in his own work, and the people making them never livestream. In his own follow-up reply: “agents rarely run more than a few minutes at a time.” By his read, the issue is not that AI does not work; it is that the dominant social-media format around AI misrepresents how serious operators actually use it.
For Yongfook the issue is raw material, not rhetoric. A production app, even an “uncomplicated” one, sits on top of a stack of decisions made in response to real users over years. The edge case of the email arriving with weird encoding. The customer in a country whose currency uses three decimals instead of two. The payment integration that changes its API without warning. The data-deletion clause that has to satisfy GDPR and Brazilian law at the same time, even when the two are not fully compatible. That stack does not describe itself in a prompt because it is not even fully formed in the founder’s head. It lives as reactive answers in the codebase, in the changelog, in the workarounds only the team remembers.
Chamath went to the financial side, which is where his job sits. Even if the app does get built overnight, the unit economics do not move just because the code generation got faster. Every inference costs money. Every token costs money. The gross margins of an AI product depend on model cost, which scales up with usage, not down. A user paying $20 a month and consuming $200 worth of compute is costing the vendor money every day they stay active. The only documented exit for that kind of business, Chamath argues, is selling it to a buyer with deep enough pockets to absorb the operating loss for years, the way Windsurf and Cursor did.
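The arithmetic behind that claim is worth making explicit. A minimal sketch (the $20 price and $200 compute figures come from the paragraph above; the margin formula is the standard one, and the function name is mine, not from any source in the article):

```python
def gross_margin(monthly_revenue: float, monthly_compute_cost: float) -> float:
    """Gross margin as a fraction of revenue; negative means the user loses the vendor money."""
    return (monthly_revenue - monthly_compute_cost) / monthly_revenue

# The heavy user from the example: pays $20, consumes $200 of inference.
heavy = gross_margin(20.0, 200.0)   # -9.0, a -900% gross margin
# A light user who barely touches the product still looks fine on paper.
light = gross_margin(20.0, 2.0)     # 0.9, a 90% gross margin

# Because cost scales with usage, margin degrades as engagement rises,
# which is the inverse of classic software economics.
print(f"heavy user margin: {heavy:.0%}, light user margin: {light:.0%}")
```

The uncomfortable property this sketch exposes is that the most engaged users, the ones every retention metric celebrates, are the ones pushing the margin deepest into the red.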
Stack the three angles together and the same structure shows up underneath. Edge cases, retention, support, real cost of inference, customer trust, distribution. Six layers. None of them fits inside a prompt. A reply in Yongfook’s thread said it more cleanly than I can: “you can one-shot the demo. you still have to pay for edge cases, retention, support, inference, trust, and distribution.” That sentence is the summary of the entire conversation.
Why the warning is landing exactly now
This is the third post this week on the same curve, looked at from three different altitudes.
The first was Monday, when GitHub, Anthropic, and xAI moved on price in the same week and the flat-rate era of AI ended. That post explained the supply side: the adoption subsidy is over because the real cost of agentic compute does not fit inside $20-a-month plans. The second was Saturday, the response to Jaya Gupta’s essay. There I separated memory from pattern recognition and argued that experience misused is a tax, but experience well used is the moat. Today’s post closes the loop. If the moat is pattern recognition across cycles, that moat shows up concretely as the six layers Yongfook and the others listed. Edge cases, retention, support, inference, trust, distribution.
That is why the Berder-Yongfook-Chamath convergence lands exactly now. The subsidy phase is closing on the vendor side. The judgment-as-advantage phase is opening on the operator side. The promise of the one-shot, which was the narrative bridge between the two, loses its support beam when both sides move at once.
Five cycles, thirty-six years, the same script
I have been doing this for thirty-six years. I started in 1990, age fifteen, programming on a Commodore 64 and a Texas Instruments machine. Since then I have watched the same script play out at least five times: a new tool ships, dramatically compresses the time from zero to first demo, and within six months a circle of courses, threads, and videos appears selling “build your software business in a weekend.”
The first one was Visual Basic, in the early 1990s. The promise was drag-and-drop components to build a Windows app in an afternoon. The demo was real. The number of people who actually built a Visual Basic business was a tiny fraction of the number of people who paid for Visual Basic books, conferences, and magazine subscriptions. What the demo did was pull forward the moment a serious programmer could start. It did not eliminate the months that followed of learning data structures, error handling, and deployment.
The second was Rails, in the mid-2000s. rails generate scaffold produced a working CRUD app in sixty seconds. The video of the 37signals founder building a blog in fifteen minutes on stage went around the internet for years. Five years later, the businesses that survived were not the scaffold-generated ones. They were the ones that overwrote almost all of the scaffold answering to real users. Same script, different tool.
WordPress did the same move shortly after. “Anyone can have a website with WordPress.” True. The part the meme skipped is that running a production WordPress site safely against malicious plugins, optimizing the database when traffic grows, and upgrading PHP without breaking half the theme still requires a human with years of systems work behind them. The curve to learn WordPress shortened. The curve to operate a business on top of WordPress did not.
Bubble and the no-code wave of the 2010s were the fourth repetition. The fifth is happening right now with vibe coding and AI one-shot.
Each cycle compressed in time. The Visual Basic narrative lived around seven years. Rails lived five. WordPress as a “for everyone” frame lived three. Bubble made it to two. The AI one-shot is running on a nine-month cycle from peak to peak. The shape of the curve is identical every time. Real demo. Hype. Courses. A few real businesses built by people who already had pattern recognition before the tool. A majority that paid for the promise. Five years later, tools that are no longer a topic because they became part of the craft.
The distinction worth keeping
There is one thing worth separating, because if the warning gets read flat, it falls apart at the first counterexample on X. There is a real difference between agents running unsupervised and agent orchestration with humans in the loop.
Claire Vo replied to Berder’s thread with a useful note. She runs about ten Claude Code instances (“openclaws,” in her words) on different schedules doing tasks for her business, and the only one that runs overnight on a regular basis is one specialized in a narrow task. That is concrete practice, not a promise. Agents work when the operator has years of process design behind them: where to inspect, where to approve, where to let it run, where to cut. That is process engineering, not magic.
The fugazi Yongfook is calling out is not the idea of using agents. It is the idea of using agents without knowing what you are doing and then charging others to learn how to do the same. Those two are not the same thing, and putting Claire Vo on the same side as the grifter selling “build your unicorn overnight” courses would be both unfair and technically wrong.
The operating question is which side of that line the agent you are considering is sitting on. If it is in a low-consequence flow with non-critical data, no paying customers, and no external audit, a bad output just costs one more iteration, and one-shot is reasonable there. If it is in production touching customer data, payment integrations, signed contracts, or brand reputation, a bad output costs weeks of remediation and sometimes months of rebuilding trust. That zone does not tolerate one-shot by definition.
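That decision rule can be written down as a checklist. A hypothetical sketch, assuming the four risk signals named in the paragraph above (the type and field names are mine, for illustration only, not from any real framework):

```python
from dataclasses import dataclass

@dataclass
class Flow:
    """A candidate flow for agent-generated, unreviewed ('one-shot') code."""
    touches_customer_data: bool
    touches_payments_or_contracts: bool
    has_paying_customers: bool
    externally_audited: bool

def tolerates_one_shot(flow: Flow) -> bool:
    # One-shot is reasonable only where a bad output costs one more iteration,
    # not weeks of remediation or months of rebuilding trust.
    high_consequence = (
        flow.touches_customer_data
        or flow.touches_payments_or_contracts
        or flow.has_paying_customers
        or flow.externally_audited
    )
    return not high_consequence

# An internal prototype on throwaway data passes; a production billing flow does not.
prototype = Flow(False, False, False, False)
billing = Flow(True, True, True, False)
print(tolerates_one_shot(prototype), tolerates_one_shot(billing))
```

The point of the sketch is that the rule is a disjunction: one high-consequence signal is enough to disqualify the flow, which is why the safe zone is small.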
What this means for a software company
If your company sells software to other companies, this warning hits twice.
The first time as a buyer. Somebody on your team is probably already experimenting with one-shot generation to speed up internal prototypes. That is fine, as long as the output is treated as a prototype and not as production code. The cleanup tax we wrote about months ago still applies: code generated quickly without review is billable technical debt, not savings. The earlier the team learns the line between the two zones, the cheaper the cut.
And there is another debt less visible than the technical one, accumulating at the same speed in silence: judgment debt. Every time the team lets the agent decide something they would have argued through before (which validation to ask for, which edge to handle, which case actually deserves a test), that muscle atrophies a little. Technical debt gets paid next sprint. Judgment debt gets paid next cycle, when something new has to be decided and nobody in the room remembers how the conversation used to run. It is exactly the moat the previous post described: pattern recognition accumulated through years of arguing real decisions. If the agent makes all the decisions, that pattern recognition is not being formed.
The second time as a seller. If your product competes against a “this can be one-shotted” promise, the answer is not to drop price to match it. The promise will deflate in six to nine months, and cutting price against a competitor that will disappear only damages your own margin. The answer is to build the case for the incremental value you provide, get specific about which layers your product solves and which the one-shot promise does not, and let the market filter buyers who learned the hard way. Some of them will come back.
Chamath named the narrow exit on the AI vendor side. There is another exit he did not name, more relevant to a serious software business. Do not participate in the “build your app from a prompt” category at all. Build products that assume the buyer does understand the difference between demo and operations, and charge that buyer accordingly for the value of solving the six real layers.
What we do at IQ Source about this distinction
AI Maestro exists precisely because the average enterprise AI buyer is exposed to this kind of promise. When we walk into a company, part of the first map is identifying where the organization is using AI in safe territory (prototypes, internal tooling, fast validation) and where it is starting to use it in critical territory (production products, customer data, financial decisions). Both zones deserve AI. Each one deserves a different kind of AI and a different governance process. That separation, which an executive committee cannot draw alone because it sits inside the incentive to demonstrate fast adoption, is what we deliver in the first map.
Technology Partner, the other line, applies to software companies whose product lives in critical territory from day one. A product team can be excellent at pattern recognition for its market and at the same time choose not to become specialists in agent runtimes, in their own scaffolding layer, or in dynamic model pricing. For those companies, the right answer is not to hire three senior AI engineers to learn something that will recompress in twelve months. It is to buy the craft by the hour from a partner who lives there, who maintains a portable agent scaffolding, and who delivers production code the internal team can actually maintain when the craft stabilizes.
In both lines, the principle is the same. The one-shot has a place, but the place is small and well-defined. The rest of software work, especially the part with paying customers, still needs what it always needed: somebody who has already lived the edge cases, who knows where things break, and who is not going to sell the CEO the idea that the business gets built over the weekend.
If your next internal conversation includes the line “we should do this with AI, somebody said it can be one-shotted,” that is the conversation. Two hours with your team, written map at the end, a clean separation between the two zones. No quote attached. The email is the usual: info@iqsource.ai.
Berder cast doubt on himself before anyone else. Yongfook spoke from the comfort of a business that already pays its own bills, no funding rounds attached. And Chamath, the least comfortable of the three because he has money already inside the category, ended up calling the cycle on his own portfolio. Three angles. Same warning. When three different incentive structures land in the same place inside 24 hours, what is happening is not three people having the same opinion by chance. The pattern was already on the wall, and the press was running late.
Frequently Asked Questions
What did Jon Yongfook mean by calling the AI one-shot a fugazi?
Jon Yongfook, the bootstrapped founder of Bannerbear, Clipcat, and Roborabbit at $79K MRR, posted on X on April 23, 2026 that AI one-shot is a fugazi (a scam, an illusion). His argument is that describing his own SaaS with all the edge cases accumulated over a decade would require a prompt the size of a small book, and that selling the idea of an overnight business through a single prompt is the same script that appears in every technology cycle.
What is the trough of disillusionment, and why did Chamath Palihapitiya invoke it?
The trough of disillusionment is the phase of the Gartner cycle where the promise of a technology stops justifying the valuations it received in the hype phase. Chamath Palihapitiya wrote on April 24, 2026 that AI is about to enter that phase and that the only reliable exit for companies running negative gross margins on AI one-shot is to grow fast and sell on time, citing Windsurf and Cursor as the documented playbook.
Which layers can a single prompt not compress?
Edge cases accumulated over years, user retention, post-sale support, the real cost of inference at scale, customer trust earned over time, and distribution are the six layers Yongfook, Berder, Chamath, and other operators called out as not compressible into a single prompt. AI one-shot can produce the demo. A sustainable business requires that accumulated capital, which is built through years of iteration with real users.
Where does one-shot generation actually fit?
At IQ Source we separate those zones during AI Maestro discovery. Internal flows with low consequences, throwaway tooling, and rapid validation prototypes do tolerate one-shot. Products with paying customers, sensitive data, third-party integrations, retention to maintain, or regulatory audit fall into Technology Partner, where the code that stays in production is written by someone who is also going to maintain it when the cycle moves again.
Related Articles
Adoption is not transformation: the post-McKinsey model
Raphaël Dabadie named the new model: software plus service. Traditional consulting runs sampling. AI transformation needs agents that map the whole organization.
The flat-rate AI era ended this week
GitHub paused Copilot Pro. Anthropic pulled Claude Code from the $20 plan. xAI parked Grok 4.3 behind a $300 tier. Three vendors, one week, same wall.