We're AI Consultants. Sometimes We Say: Don't Use AI
Ricardo Argüello — March 24, 2026
CEO & Founder
General summary
42% of companies scrapped most of their AI initiatives in 2025. Many of those projects should never have started. This post lays out five concrete scenarios where AI is the wrong tool and what to use instead — from the perspective of a consultancy that makes its living implementing AI but earns more trust when it says no.
- S&P Global reports that 42% of companies scrapped most of their AI initiatives in 2025 — double the rate from the previous year
- When a process can be described as if/then rules with 95%+ accuracy, a rules engine costs a fraction and is more reliable than any AI model
- BCG found that 70% of AI project success depends on people and processes, not technology — automating a broken process just breaks it faster
- The alternatives table at the end maps each scenario to the simplest solution: rules engine, classical ML, workflow automation, or process redesign
- At IQ Source, roughly 4 out of 10 engagements end with a recommendation to skip AI — saying no when it's the right call is what builds long-term trust
Imagine taking your car to a mechanic who tells you that you don't need a new engine — you need an oil change. You might be skeptical because they sell engines. But the next time your engine actually fails, who are you going to trust? The mechanic who sold you an engine you didn't need, or the one who saved you money by being honest? That's what we do with AI: sometimes the expensive, complex solution isn't the right one, and saying so is the best thing we can do for the client.
42% of companies scrapped most of their AI initiatives in 2025. Double the rate from the year before.
Many of those projects should never have started.
In 25 years of building enterprise software, I’ve watched this movie before. Cloud was going to replace everything. Mobile was going to replace everything. The companies that won each cycle weren’t the earliest adopters — they were the ones who knew when the new technology was the wrong tool.
The same thing is happening with AI. And I’m saying this as someone who makes a living implementing it.
At IQ Source, roughly 4 out of every 10 times a client comes to us wanting AI, we tell them they don’t need it. That’s not bad for business. It’s the best thing we do for business.
Five questions before we start
We don’t have a proprietary methodology or a 12-step process. We have five questions we ask in the first meeting with every client. If any of them raises a flag, the project changes direction — or doesn’t start at all.
- Is the process based on fixed rules? If so, a traditional rules engine will always be cheaper and more predictable.
- Does regulation require an audit trail? If you're in a regulated sector, you need to explain every decision, and a black box won't survive a compliance review.
- Is the data clean and sufficient? Feeding dirty data into a model just teaches it patterns that don't exist.
- Do the numbers add up? Weigh the cost of each API call against the actual value that decision generates.
- Does the process work organizationally? If the current process doesn't work well at the organizational level, AI will just automate the chaos.
The most common case we see: someone wants an AI chatbot to classify internal support tickets. But when we look at the process, they have a handful of categories, clear rules for each, and a team already classifying them correctly over 95% of the time.
That problem isn’t about intelligence — it’s plain automation. A rules engine with automated routing takes two weeks to build, costs a fraction of an AI project, and hits 100% accuracy within the defined rules. No API latency, no per-transaction cost, no hallucinations.
Telling a client that, instead of selling them the more expensive project, builds trust worth more than any single contract.
If you can write it as if/then, you don’t need a model
This is the most common scenario we see. A client wants to use AI for something that can be solved with deterministic logic.
Tax calculations. Regulatory compliance checks. Ticket routing. Product classification by inventory rules. Form validation.
The difference in numbers is clear. A rules engine responds in milliseconds at a marginal cost of zero, with 100% accuracy within the parameters you defined, and you know exactly what happened in every case. An AI model takes 200 to 2,000 milliseconds, charges you $0.002 to $0.05 per API call, typically lands between 85% and 95% accuracy, and costs you most of the explainability.
Models are only worth the cost when you’re dealing with ambiguity — interpreting user intent, handling unstructured text, classifying inputs that don’t fit neat categories. If 95% of your cases follow clear rules, forcing AI into the mix just makes your system expensive and unpredictable.
Back to the support ticket example: if your categories cover 95%+ of volume, the remainder goes to a manual queue — which is exactly what an AI model would do when its classification confidence is too low. Same outcome, without the cost.
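The ticket-routing case above can be sketched in a few lines of deterministic code. The categories and keywords here are hypothetical placeholders; a real deployment would load the rules from configuration, but the shape is the same: first matching rule wins, and anything unmatched falls through to the same manual queue an AI model would use at low confidence.

```python
# Minimal sketch of a rules-based ticket router. Categories and keywords
# are illustrative, not from any real system.

RULES = [
    ("billing",  ["invoice", "refund", "charge", "payment"]),
    ("access",   ["password", "login", "locked", "2fa"]),
    ("hardware", ["laptop", "monitor", "printer"]),
]

def route_ticket(subject: str) -> str:
    """Return the first category whose keywords match; otherwise escalate."""
    text = subject.lower()
    for category, keywords in RULES:
        if any(kw in text for kw in keywords):
            return category
    return "manual-review"  # same fallback an AI model uses at low confidence

print(route_ticket("Cannot log in after password reset"))   # access
print(route_ticket("Strange request about a partnership"))  # manual-review
```

No API latency, no per-call cost, and every routing decision is traceable to a specific rule.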
98% accuracy with no explanation is worse than 95% with an audit trail
A bank denies your loan. You ask why. The answer you get: “the model determined that your risk profile is elevated.”
Is that acceptable? In many jurisdictions, no. And in more of them every year.
In highly regulated sectors, accuracy alone doesn’t cut it. You need to explain your math. A model that’s right 98% of the time but works as a black box can be worse than a rules-based system that’s right 95% and produces an auditable decision chain.
The data backs this up. Gartner predicted that over 30% of generative AI projects would be abandoned after proof of concept, and one of the primary factors is the inability to meet risk controls and governance requirements.
When a client in financial services or healthcare asks us to implement AI for decision-making, the first conversation isn’t about models or data. It’s about what happens when a regulator asks them to explain a decision. If there’s no clear answer, we step back to a rules-based system — one that can show exactly why every decision was made.
Your data problem is not an AI problem
Clients often come to us wanting to predict demand with AI, but when we dig into their systems we find they have six months of history for just 40 products with recurring sales out of a catalog of 300. At those volumes, an AI model won’t find patterns — it’ll invent them. That’s not an AI problem. It’s a data problem. And AI doesn’t solve it — it amplifies it.
| Data situation | Recommended approach |
|---|---|
| Thousands of clean records, clear patterns | Classical ML (regression, decision trees) works well and is interpretable |
| Hundreds of records, some signals | Heuristics based on business rules + moving averages |
| Dirty or inconsistent data | Data cleaning and standardization first, AI later |
| Few records, many variables | The model will memorize noise, not learn patterns |
BCG found that only 25% of companies realize significant value from their AI initiatives — and just 5% achieve it at scale. One of the primary reasons: 63% of organizations don’t have the data management practices needed for AI.
If you don’t have enough data, a classical ML model with 10 well-chosen variables will outperform an LLM that has nothing to work with. And it will cost a fraction of the price.
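The "heuristics + moving averages" row in the table above is as simple as it sounds. This is a sketch with made-up numbers: a trailing average over the last few months, padded with a safety factor as a stand-in for a business rule. With six months of history per product, this is about as much structure as the data can support.

```python
# Sketch of a moving-average demand heuristic. The history below is
# illustrative data, and the window/safety-factor values are assumptions
# a real business would tune.

def forecast_next_month(monthly_sales: list[float], window: int = 3,
                        safety_factor: float = 1.2) -> float:
    """Trailing average of the last `window` months, padded by a safety factor."""
    recent = monthly_sales[-window:]
    return round(sum(recent) / len(recent) * safety_factor, 1)

history = [120, 95, 130, 110, 125, 140]  # six months of sales for one product
print(forecast_next_month(history))       # (110+125+140)/3 * 1.2 = 150.0
```

Crude, but interpretable, and it won't invent patterns that aren't there.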
When the math doesn’t work: latency and per-transaction cost
This is the easiest scenario to evaluate because it’s pure arithmetic.
Say you have an e-commerce platform processing 50,000 transactions per day. You want to use AI for real-time fraud detection. The model needs to evaluate every transaction before approving it.
The math:
- Cost per API call to a language model: ~$0.01 (conservative)
- 50,000 transactions/day × $0.01 = $500/day ≈ $15,000/month
- Incremental value of fraud the model prevents beyond what simple rules already catch (assuming 0.1% fraudulent transactions at an $80 average ticket, with rules catching the bulk): ~$4,000/month
The model costs nearly 4x what it saves. And that’s before accounting for latency: each API call adds 200-500ms. In a checkout flow, that’s the difference between a completed purchase and an abandoned cart.
The alternative? A rules-based system that flags suspicious transactions (unusual amount, different-country IP, three failed attempts) and only sends those — say 2% — to a more sophisticated model. Cost: $300/month instead of $15,000. Latency: <10ms for 98% of transactions.
Even though AI costs have dropped ~80% in 18 months, the per-transaction arithmetic still kills projects where volume is high and per-unit value is low. Run the numbers before you start.
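The arithmetic above fits in a few lines, which is the point: run it before writing a single prompt. The figures are the ones from the example; substitute your own.

```python
# Break-even check for per-transaction AI costs. Figures are from the
# worked example above; swap in your own before deciding.

def monthly_model_cost(tx_per_day: int, cost_per_call: float,
                       days: int = 30) -> float:
    """Monthly API spend if the model scores every routed transaction."""
    return tx_per_day * cost_per_call * days

full_ai = monthly_model_cost(50_000, 0.01)               # score everything
hybrid  = monthly_model_cost(int(50_000 * 0.02), 0.01)   # rules filter out 98%
value   = 4_000  # estimated incremental fraud prevented per month

print(f"full AI: ${full_ai:,.0f}/mo, hybrid: ${hybrid:,.0f}/mo, "
      f"value: ${value:,}/mo")
# full AI costs ~3.75x the value it generates; the hybrid costs a fraction of it
```

If the cost line exceeds the value line at your volumes, the project dies on arithmetic alone, before you ever discuss models.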
No amount of AI fixes a broken process
At Word Magic, the translation software company my father and I founded in the 90s, we spent months trying to automate online order receiving and processing. We built a custom internal system — I told that story in another post. But the problem wasn't the system. It was that the fulfillment process wasn't standardized: information came from different channels in different formats, validation rules varied depending on who processed the order, and there was no single agreed-upon workflow across teams.
No technology was going to solve that. We had to sit down, define a clear and consistent process, and only then did the automation actually work.
That pattern shows up in roughly 70% of the AI consultations I do today. The client describes a classification, prediction, or analysis problem. We investigate. And we discover that the real issue is that there’s no clear process, or three teams operate with different definitions, or data isn’t collected consistently.
BCG quantifies this with their 10-20-70 rule: AI transformation success depends 10% on algorithms, 20% on data and technology, and 70% on people, processes, and organizational change. McKinsey confirms that only 39% of organizations report EBIT impact from their AI investments — largely because they don’t redesign workflows.
Automating a broken process doesn’t fix it. It breaks it faster.
We’ve written about the questions AI doesn’t ask. This is the most important one: does the process you want to automate work well without technology? If not, don’t buy a model. Fix the process.
What to use instead
| Scenario | Simpler alternative | Approximate cost | Implementation time |
|---|---|---|---|
| Process with clear, fixed rules | Rules engine (custom code or low-code tools) | $2K-$10K | 1-3 weeks |
| Prediction with limited structured data | Classical ML (regression, random forest) | $5K-$15K | 2-4 weeks |
| Connecting systems without complex logic | Workflow automation (n8n, Make, Zapier) | $1K-$5K | 1-2 weeks |
| Disorganized process with inconsistent definitions | Process redesign + documentation | $3K-$8K | 2-4 weeks |
| Anomaly detection at high volume | Rules to filter + model only for ambiguous cases | $5K-$12K | 3-5 weeks |
These alternatives aren’t exciting. They won’t generate a stunning slide for the board. But they solve the problem at a fraction of the cost, with less risk, and in less time.
When AI IS the right answer
I don’t want this post to read as anti-AI. We make our living on it. But AI wins when the problem involves ambiguous natural language, pattern recognition in unstructured data, personalization at scale, or reasoning over constantly changing contexts.
If your problem fits there, we’ve written about how to close the utilization gap and about how to deploy AI agents in real operations. That’s the other half of our work — and the part that excites us most.
But that work delivers better results when we first filter out the projects that shouldn’t have been AI in the first place.
I’d rather tell you today that you don’t need AI and earn your trust, than sell you a project that ends up in the 42% graveyard. If you’re not sure whether your next project needs AI or something simpler, that’s exactly the conversation we have in our AI maturity assessment.
Frequently Asked Questions
When should a company avoid using AI?
Companies should avoid AI when the process can be solved with deterministic rules (if/then logic), when regulations require explaining every decision and a model can't do that, when data is insufficient or dirty, when per-transaction API costs exceed the value generated, or when the real problem is organizational rather than technical.
How often do AI projects fail?
According to S&P Global, 42% of companies scrapped most of their AI initiatives in 2025, double the previous year. BCG found that only 25% of companies realize significant value from AI, and just 5% achieve it at scale. Gartner predicted that over 30% of generative AI projects would be abandoned after proof of concept.
What are the alternatives to AI?
Alternatives depend on the scenario: rules engines for deterministic processes, classical ML models (regression, decision trees) when data is structured but limited, workflow automation tools (Zapier, Make, n8n) for connecting systems, and process redesign when the problem is organizational. These options cost 70% to 95% less than an AI project.
How do you decide whether a project needs AI?
At IQ Source we evaluate five questions before recommending AI: Can the process be described with fixed rules? Does regulation require explaining every decision? Is there enough clean data? Does the per-transaction cost math work? Does the current process work well organizationally? If any answer suggests AI won't add value, we recommend the simpler alternative.