The Ice Cracked. Which Side Is Your Company On?

Ramp data: top AI spenders doubled revenue since 2023. The difference isn't which model — it's how deeply AI lives inside operations.

Ricardo Argüello

CEO & Founder

Business Strategy · 8 min read

Eric Glyman, CEO of Ramp, used a metaphor recently that I can’t stop thinking about.

Ernest Shackleton and 27 men were camped on Antarctic ice after losing their ship. One night, the ice cracked beneath them. The men who reacted quickly jumped to the safe side. Within minutes, the gap became too wide to cross.

Glyman’s point: the same split is happening right now across the American economy. Except this crack is quiet. It shows up in margins, in growth rates, in talent retention, in conversion and churn. Most companies on the wrong side don’t even see it yet.

I’ve been watching the data that supports this for months. And three completely independent sources — Ramp’s spending data, Princeton’s AI research, and Anthropic’s enterprise growth — all point to the same conclusion.

The gap is real. It’s widening. And the differentiator isn’t which AI model you picked.

The data from 50,000 businesses

Ramp’s Economics Lab tracks spending across more than 50,000 businesses. Since 2023, the top quartile of AI spenders on the platform more than doubled their revenue. The bottom quartile? Essentially flat.

And these aren’t just tech companies.

Take a roofing company in Texas. After implementing AI for estimates and job documentation, revenue jumped 24%. In Utah, a window installer streamlined proposals and quotes with AI tools for over a year — revenue grew 59%. Even established players are seeing massive gains: a five-person construction firm in Florida, already pulling in over $20M, grew 65% last year by using LLMs to handle contract drafting that used to eat half their week.

Roofers. Window installers. Construction firms. Not startups in San Francisco with $50M in venture funding.

The spread has widened every year, and the widening is accelerating. And Ramp’s data skews toward fast-growing companies and early adopters — the rest of the economy likely looks even starker.

Here’s the part that should concern anyone feeling comfortable: revenue is a lagging indicator. By the time it moves, the market has already moved. Revenue slowdown isn’t even where it ends. It ends in shrinking margins, talent attrition, and eventual irrelevance.

It’s not about which model you pick

This is where most companies get it wrong. They spend weeks evaluating GPT vs. Claude vs. Gemini, running benchmark comparisons, debating pricing tiers. And they miss the actual lever.

Researchers at Princeton published a paper that tested the same AI model — identical capabilities, same training — with different surrounding setups. One setup had a well-designed system around the model: the right tools connected, information delivered in the right format, feedback loops that caught mistakes early. The other was a raw, default setup.

The result: 64% better performance. Same model. The only variable was the environment.

Think of it this way. Two restaurants with the same oven. In one kitchen, ingredients are organized, recipes are at hand, prep stations are clean, and a restocking system keeps everything flowing. In the other, ingredients are scattered across three rooms, there’s no recipe, and the chef has to go find things mid-service. Same oven. Same chef. Completely different meals.

That’s what’s happening with AI in business. The companies getting real ROI aren’t the ones who picked the “best” model. They’re the ones who connected AI to their actual data, their actual workflows, and their actual decision points. The model is what thinks. The system around it is what determines what it thinks about.

OpenAI proved this operationally when a team of three engineers shipped over a million lines of production code using AI agents. The bottleneck was never the model’s capability. It was always the design of the environment around it — the tools, the feedback loops, the data connections. Anthropic proved the same thing building Claude Code. The performance scaled with the quality of the surrounding system, not the model.

For your company, this means the debate about which AI provider to use is roughly 20% of the decision that matters. The other 80% is how deeply you wire AI into the way your business actually operates.

Why Claude went from 4% to 40% enterprise share

If you want to see this principle play out at market scale, look at what happened to Anthropic’s Claude in the enterprise.

In roughly one year, Claude went from about 4% to about 40% enterprise market share. Over 500 customers now spend more than $1M annually with Anthropic. That kind of growth doesn’t come from having slightly better benchmarks.

It came from integration.

Claude doesn’t sit alongside enterprise tools as a separate tab. Through the Model Context Protocol (MCP), Claude lives inside the tools people already use. It writes code in your IDE. It runs workflows through your existing systems. It connects to your databases, your documents, your internal APIs.

That’s a fundamentally different product than “go to this website and type your question.”
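Mechanically, that kind of integration boils down to a registry that routes model-issued tool calls to your internal systems — the pattern MCP standardizes. Here is a minimal sketch of that pattern. To be clear, this is not the MCP SDK; every function and name below is invented for illustration.

```python
# Minimal sketch of the integration-layer pattern that MCP standardizes:
# tools are registered once, then invoked by name when the model requests
# them. This is NOT the MCP SDK; all names here are hypothetical.

TOOLS: dict = {}

def tool(name: str):
    """Register a function so the model can call it by name."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("query_orders")
def query_orders(customer: str) -> list[dict]:
    # Stand-in for a real query against your database.
    return [{"customer": customer, "status": "shipped"}]

@tool("draft_contract")
def draft_contract(template: str) -> str:
    # Stand-in for a document-generation workflow.
    return f"Contract drafted from template '{template}'."

def handle_tool_call(name: str, **kwargs):
    """Route a model-issued tool call to the matching internal system."""
    if name not in TOOLS:
        raise ValueError(f"Unknown tool: {name}")
    return TOOLS[name](**kwargs)

print(handle_tool_call("query_orders", customer="ACME"))
```

Once the registry exists, adding a new capability means registering one more function — the model never changes, only the environment it can reach into.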

And it creates a fundamentally different competitive dynamic. Consumer AI is a brand war — today’s #1 app is tomorrow’s #2, and users switch with zero friction. Enterprise AI is workflow integration that compounds over time. Once AI is embedded in how your team actually works — in your procurement flows, your reporting pipelines, your customer service operations — the switching cost isn’t a subscription fee. It’s rewiring your entire operation.

Models compete. Workflows compound. That’s why 95% of AI’s potential remains untapped — most companies bought the model but never built the integration layer.

Kodak invented the camera that killed it

Glyman made another comparison that stays with me: Kodak.

Kodak invented digital photography in 1975. They had the data. They had the technology. They could see the trend. They still went bankrupt in 2012.

The problem wasn’t that they didn’t know. It’s that they didn’t restructure their operations around what they knew. They kept doing what was comfortable because revenue was still good. Until it wasn’t.

Today, the data is public. Ramp published it. Princeton published the research. Anthropic’s market share speaks for itself. The question isn’t whether AI integration drives revenue — we’re past that debate. The question is whether your company acts on it or keeps trusting the current numbers while the ground shifts.

Revenue looking good today does not mean safe. Your current AI investments may already be depreciating — and the gap between companies that integrated AI deeply and those that treated it as a side tool is getting wider every quarter.

Freezing doesn’t feel dangerous. Glyman nailed that. It feels calm. People take off their coats. And by the time they notice, the ice has drifted too far apart to jump.

I didn’t always jump

I’m writing this with the honesty of someone who’s been on both sides.

Throughout my career, there were moments when the ice cracked and I didn’t jump. I stayed on the comfortable side, convinced the numbers still looked fine, that there was time. There wasn’t. And the cost of not jumping was real — opportunities I never recovered, advantages others built while I waited.

But there were also times when I jumped before everyone else. When the discomfort of moving early became the advantage precisely because no one else had moved yet.

Glyman’s metaphor is powerful, but incomplete. Jumping is only half of it. The other half is what you do when you land on the other side. Because the safe ice isn’t permanent either — eventually it cracks again, and you need to be ready to jump once more.

That’s what I learned: it’s not about one heroic leap. It’s about building the capacity to jump every time the ice shifts.

What we do about this at IQ Source

We design the integration layer.

Not model selection — we’re model-agnostic. What matters is how AI connects to your data, your workflows, and your decision points. That’s where the 64% improvement lives. That’s what turned a five-person construction firm into a $33M operation. That’s what took Claude from 4% to 40%.

In our experience working with B2B companies across Latin America, most failed AI implementations had a capable model and no integration strategy. The AI was powerful. It just wasn’t connected to anything that mattered.

We connect it.

If you suspect your AI is sitting beside your operations instead of inside them, send us a one-paragraph description of your current setup. We’ll return a quick integration diagnostic — where the gaps are between your AI and your workflows, and what connecting them would look like.

Send my AI setup for diagnostic

