Your AI Investment Needs a Learning Curve Strategy
Ricardo Argüello — March 25, 2026
CEO & Founder
General summary
Anthropic's second Economic Index reveals that experienced AI users don't delegate more — they iterate more. With a 73.1% task success rate vs 66.7% for newcomers, the gap isn't about which model you buy, but how your team learns to work with it.
- Users with 6+ months of experience achieve 73.1% task success vs 66.7% for new users
- Experts iterate more (28.2%) and delegate blindly less (29.4% directive vs 38.1% for newcomers)
- Personal use (weather, sports, basic queries) rose from 35% to 42%, driven by newer users
- AI is already used for tasks paying $47.9/hour on average — above the US mean wage
- Companies investing in iteration protocols get compounding returns, not linear ones
Think about learning to drive. The first few months you follow GPS directions to the letter, even when it sends you the long way around. With experience, you learn when to trust it, when to ignore it, and when to take shortcuts only you know. The same thing happens with AI: beginners accept the first answer. Veterans know when to ask for something better.
Yesterday Anthropic published Learning Curves, the second edition of their Economic Index. In January I wrote about the 95% AI utilization gap revealed in the first report. That number told you how big the problem was. This new report tells you how to close it.
And the answer isn’t buying a better model.
Anthropic analyzed one million real conversations with Claude between February 5 and 12, 2026. Not surveys, not analyst projections: actual usage data. And that data shows that AI progress doesn't depend on more powerful models. It depends on something far less glamorous: people learning to use them well.
What they measured and why it matters more this time
The first Economic Index answered a descriptive question: what is AI being used for? The answer was that it’s used for far less than it can do — that 95% gap.
Learning Curves goes deeper. It classifies each conversation into five interaction types (directive, feedback loop, task iteration, validation, and learning), analyzes how patterns shift with user experience, and measures whether more experienced users actually get better results.
The sample includes data from Claude.ai (the web app) and from the first-party API, allowing comparison between individual users and teams integrating AI into automated workflows.
The central finding: user experience matters as much as model capability.
73% vs 67%: the gap you can’t buy
A few weeks ago a client told me he uses AI to check the time in Australia and the weather in Japan. I told him Google handles that better — language models like ChatGPT or Claude don’t have access to real-time data unless connected to search. It’s like using a microscope to read the newspaper: it works, but you’re wasting capacity.
That pattern isn’t anecdotal. Anthropic measured it: personal use (sports, weather, product comparisons) rose from 35% to 42% of Claude.ai conversations, driven by newer users. Meanwhile, work-related use held steady at ~45%.
But when you look at users with more than six months of experience, the picture changes. They do 7 percentage points more work-related tasks than newcomers, and 4 points fewer personal queries. It’s not that they don’t use AI for personal things — they learned what it’s actually good for.
And the results show it:
- Task success rate: 73.1% for experienced users, 66.7% for newcomers (+6.4 percentage points)
- Task iteration: 28.2% of veteran conversations include active iteration, vs 24.5% for newcomers
- Directive mode: Newcomers give direct instructions without review 38.1% of the time. Experienced users, only 29.4%
That 6.4-point gap in success rate could be explained if veterans simply tackle easier tasks. But Anthropic controlled for that. Even comparing the same task type, in the same country, with the same model, experienced users still have ~4 percentage points more success.
They didn’t buy a better model. They learned to use the one they had.
Disciplined iteration, not more tools
There’s one finding in the report I didn’t expect: the most experienced users don’t automate more. They iterate more.
The popular narrative says the natural evolution of AI usage is moving from asking questions to letting it do things on its own, with full autonomy as the end goal. Anthropic shows the opposite. Users who've been on Claude the longest are more likely to validate results (+1.3 points) and to iterate on responses (+3.6 points), and less likely to give directive instructions without review (-8.7 points).
In our experience at IQ Source, this tracks. The clients getting the best results from AI aren’t the ones letting it run unsupervised — they’re the ones who developed a working rhythm with it. Ask, review, adjust, ask again. It’s slower than delegating and forgetting, but the results are consistently better.
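That ask-review-adjust rhythm is easy to sketch in code. The snippet below is a toy illustration, not a real SDK: `ask_model` is a stub standing in for any chat API call, and the acceptance check is whatever "good enough" means for your task.

```python
# Illustrative sketch of the ask-review-adjust working rhythm.
# `ask_model` is a STUB standing in for any real model/API call.

def ask_model(prompt: str) -> str:
    """Placeholder for a real model call (e.g. an API request)."""
    # Toy behavior: the answer improves once the prompt carries feedback.
    return "detailed answer" if "be more specific" in prompt else "vague answer"

def iterate(task: str, accept, refine, max_rounds: int = 3) -> str:
    """Ask, review, adjust, ask again -- instead of keeping the first response."""
    prompt = task
    answer = ask_model(prompt)
    for _ in range(max_rounds - 1):
        if accept(answer):                   # review: is this actually good enough?
            break
        prompt = refine(prompt, answer)      # adjust: fold feedback into the prompt
        answer = ask_model(prompt)           # ask again with the refined prompt
    return answer

result = iterate(
    "Summarize the Q3 pipeline",
    accept=lambda a: "detailed" in a,
    refine=lambda p, a: p + " -- be more specific",
)
```

The point of the sketch is the shape of the loop, not the stub: the first answer is reviewed against an explicit acceptance criterion before anyone acts on it.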
What people do with AI also diversified. The top 10 tasks now represent 19% of Claude.ai conversations, down from 24% in November 2025. AI is penetrating more types of work, not just concentrating on coding and writing.
And the ratio between augmenting and automating still favors collaboration: 53% of Claude.ai conversations are augmentation (human works with AI), vs 44% automation (AI executes alone). The API trend differs — but the API serves programmatic workflows, not individual professionals.
$47.9 an hour and who pays for which model
A number that should get any business leader's attention: the average value of tasks on Claude.ai is $47.9 per hour, measured as the average wage a human worker earns doing that same task. On the API, it's $50.7. The US average wage is $37.3 an hour.
AI isn’t being used for cheap work. It’s being used for expensive work.
And users are being surprisingly rational about model selection. Anthropic offers three tiers: Haiku (fast and economical), Sonnet (balanced), and Opus (most capable). The data shows users choose Opus 4.4 percentage points more than expected for computing and math tasks, and 6.5 points less for educational tasks.
The correlation is clear: for every $10 increase in a task’s hourly wage, the proportion of Opus usage rises 1.5 points on Claude.ai and 2.8 points on the API. Software developers use Opus 34% of the time; tutors, only 12%.
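As a rough rule of thumb, that correlation can be written as a linear extrapolation. Only the slopes (1.5 and 2.8 points per $10) come from the report; the baseline Opus share at the average wage is an assumed anchor, invented here purely for illustration.

```python
# Back-of-the-envelope version of the reported correlation: Opus share rises
# 1.5 points per $10 of hourly wage on Claude.ai and 2.8 points on the API.
# The baseline share at the $37.3 US average wage is an ASSUMED anchor;
# the report gives the slopes, not the intercept.

SLOPE_PER_DOLLAR = {"claude_ai": 1.5 / 10, "api": 2.8 / 10}
US_AVG_WAGE = 37.3
BASELINE_SHARE = 20.0  # assumed Opus share (%) at the average wage

def expected_opus_share(hourly_wage: float, surface: str = "claude_ai") -> float:
    """Linear extrapolation of Opus usage share from a task's hourly wage."""
    delta = hourly_wage - US_AVG_WAGE
    return BASELINE_SHARE + SLOPE_PER_DOLLAR[surface] * delta

# A $47.9/hour task sits $10.6 above the average wage, so on Claude.ai the
# rule of thumb adds roughly 0.15 * 10.6, about 1.6 points over the baseline.
```

Under these assumptions the same high-wage task pulls harder toward Opus on the API than on Claude.ai, which matches the report's direction even if the exact intercept is unknown.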
We wrote about how enterprise AI economics changed in 2026 with the price reductions. Learning Curves adds another dimension: it’s not about always buying the most expensive model. It’s about your team knowing when to use each tier. That’s a learned skill, not a budget decision.
Augment before you automate (still)
There’s one finding in the report that deserves special attention for B2B companies: sales and outreach automation doubled in the API between November and February. Lead qualification, customer data enrichment, prospecting emails — all growing rapidly.
But before you rush to automate your sales pipeline, look at the full pattern. The most successful users didn’t start by automating. They started by augmenting: using AI as a copilot to understand what works, iterating on messages, refining qualification criteria. Only after building that experience did they automate the patterns they’d already mastered.
We’ve written about when AI isn’t the right answer. Learning Curves confirms that thesis from another angle: even the most experienced users prefer AI as copilot, not replacement. Automation works when you already know exactly what to automate. Jumping into automation without understanding the process usually just means you’ll scale your errors faster.
How we build learning curves with our clients
At IQ Source we work with a three-phase approach:
Phase 1: Usage pattern diagnosis. Before changing anything, we map how the team uses AI today. Are they in directive mode (give instructions and accept the first response)? Do they iterate? What types of tasks are they using AI for — basic queries or high-value work? This initial diagnosis tells us where they are on the curve.
Phase 2: Iteration protocols by role. Strategies vary by role — sales focuses iterations on qualifying leads and personalizing messages, operations on detecting patterns in production data, legal on contract review and compliance. What they share: never accept the first response.
Phase 3: Model selection by complexity. We teach teams to use the right model for each task. You don’t need Opus to translate an email. You shouldn’t use Haiku to analyze a $500K contract. This calibration saves costs and improves results — exactly the pattern Anthropic documented with data from one million conversations.
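A Phase 3 calibration rule can be as simple as a lookup on task stakes and complexity. The tier names below are Anthropic's (Haiku, Sonnet, Opus); the thresholds and the function itself are invented for illustration and are not our actual client protocol.

```python
# Illustrative tier-selection heuristic in the spirit of Phase 3.
# Tier names are Anthropic's; the dollar thresholds are ASSUMPTIONS
# made up for this sketch, not a published routing rule.

def pick_tier(task_value_usd: float, needs_deep_reasoning: bool) -> str:
    """Route a task to a model tier by its stakes and complexity."""
    if needs_deep_reasoning or task_value_usd >= 10_000:
        return "opus"      # high-stakes or genuinely hard work
    if task_value_usd >= 100:
        return "sonnet"    # balanced default for everyday knowledge work
    return "haiku"         # fast and economical for routine tasks

# Translating an email is a haiku-class task; reviewing a $500K contract
# clears both the stakes and the complexity bar and routes to Opus.
email_tier = pick_tier(5, needs_deep_reasoning=False)
contract_tier = pick_tier(500_000, needs_deep_reasoning=True)
```

The exact thresholds matter less than having any explicit rule: once the routing criterion is written down, the team can argue about it, measure it, and tighten it.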
Anthropic’s data validates what we see with clients every week: companies that invest in building a learning curve with AI get compounding returns. The team naturally improves over time, learns from each interaction, and stops making the same mistakes.
Buying licenses without training the team doesn’t create that effect. It just creates the illusion of progress — while your team keeps using AI to check the weather.
The difference between 67% and 73% task success isn’t which model you buy — it’s how much you invest in your team learning to work with it. If you want to know where your team is on that curve, we run a 90-minute AI usage pattern diagnostic: how they interact with models, where they accept the first answer without iterating, and which work protocols can change immediately.
Schedule the usage pattern diagnostic →
Frequently Asked Questions
How much better are experienced AI users than newcomers?
According to Anthropic's Learning Curves report (March 2026), users with 6+ months of experience achieve a 73.1% task success rate compared to 66.7% for newcomers. This gap persists even after controlling for task type, country, and model selection, suggesting experience itself drives better outcomes.
What does structured iteration with AI mean?
Structured iteration means reviewing, questioning, and refining AI responses instead of accepting the first output. According to Anthropic, 28.2% of experienced user conversations include active task iteration, and these users are less likely to give directive instructions without review.
What is the economic value of tasks performed with AI?
According to the March 2026 Anthropic Economic Index, the average task value on Claude.ai is $47.9/hour, and $50.7 via API. Both figures exceed the US average wage of $37.3/hour, indicating AI is being used for high-value knowledge work, not just routine tasks.
What is the difference between augmenting and automating with AI?
Augmenting means using AI as a copilot that complements the human (53% of Claude.ai usage). Automating means AI executes independently (44%). For B2B companies, the most effective path is to augment first to build experience, then automate the tasks your team has already mastered with AI.
How do you build an AI learning curve in a company?
Three steps: first, diagnose current AI usage patterns across your team. Second, implement structured iteration protocols by role. Third, establish model selection criteria based on task complexity. Companies that invest in this learning curve see compounding returns over time.
Related Articles
AI Killed Execution. The Bottleneck Is Now You.
Simon Willison is wiped out by 11am directing agents. Andreessen says execution is dead. The bottleneck your company faces just moved.
Mercor Breach: 4 TB of Biometric Data You Can't Rotate
Mercor, the $10B AI startup training models for OpenAI and Anthropic, fell to the LiteLLM supply chain attack. Lapsus$ claims video interviews, face scans, and passports from 30,000+ contractors.