
We're AI Consultants. Sometimes We Say: Don't Use AI

An AI consultancy telling clients 'skip the AI' sounds contradictory. But it's the most valuable thing we do.


Ricardo Argüello


CEO & Founder

Business Strategy · 9 min read

42% of companies scrapped most of their AI initiatives in 2025. Double the rate from the year before.

Many of those projects should never have started.

In 25 years of building enterprise software, I’ve watched this movie before. Cloud was going to replace everything. Mobile was going to replace everything. The companies that won each cycle weren’t the earliest adopters — they were the ones who knew when the new technology was the wrong tool.

The same thing is happening with AI. And I’m saying this as someone who makes a living implementing it.

At IQ Source, roughly 4 out of every 10 times a client comes to us wanting AI, we tell them they don’t need it. That’s not bad for business. It’s the best thing we do for business.

Five questions before we start

We don’t have a proprietary methodology or a 12-step process. We have five questions we ask in the first meeting with every client. If any of them raises a flag, the project changes direction — or doesn’t start at all.

  1. Is the process based on fixed rules? If so, a traditional rules engine will always be cheaper and more predictable.
  2. Can you explain every decision? If you're in a regulated sector, you need an audit trail, and a black box won't survive a compliance review.
  3. Is the data clean? Feeding dirty data into a model just teaches it patterns that don't exist.
  4. Do the numbers add up? You need to weigh whether the cost of each API call makes sense against the actual value that decision generates.
  5. Does the process work today? If it doesn't work well at the organizational level, AI will just automate the chaos.
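The five checks are simple enough to encode. Here's a minimal sketch of the screening logic; the inputs and verdict labels are hypothetical, not our actual process:

```python
# The five screening questions above as a checklist (illustrative only;
# inputs and verdicts are hypothetical labels, not a real methodology).
def screen_project(fixed_rules: bool, needs_audit_trail: bool,
                   clean_data: bool, roi_positive: bool,
                   process_is_sound: bool) -> str:
    if fixed_rules:
        return "rules engine"                  # question 1
    if needs_audit_trail:
        return "auditable rules-based system"  # question 2
    if not clean_data:
        return "fix the data first"            # question 3
    if not roi_positive:
        return "rethink the economics"         # question 4
    if not process_is_sound:
        return "fix the process first"         # question 5
    return "AI is a candidate"

print(screen_project(False, False, True, True, True))  # AI is a candidate
```

The order matters: each check is a cheaper off-ramp than the one after it, which is why the questions come in this sequence in the first meeting.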

The most common case we see: someone wants an AI chatbot to classify internal support tickets. But when we look at the process, they have a handful of categories, clear rules for each, and a team already classifying them correctly over 95% of the time.

That problem isn’t about intelligence — it’s plain automation. A rules engine with automated routing takes two weeks to build, costs a fraction of an AI project, and hits 100% accuracy within the defined rules. No API latency, no per-transaction cost, no hallucinations.
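As a sketch of what that looks like in practice (the categories and keywords here are invented for illustration, not a client's actual rules):

```python
# Minimal rules-based ticket router. Categories and keywords are
# hypothetical; a real engine would load these from configuration.
RULES = [
    ("billing",  ["invoice", "refund", "charge", "payment"]),
    ("access",   ["password", "login", "locked out", "2fa"]),
    ("hardware", ["laptop", "monitor", "printer"]),
]

def route_ticket(subject: str) -> str:
    text = subject.lower()
    for category, keywords in RULES:
        if any(k in text for k in keywords):
            return category
    # Unmatched tickets go to a human queue, which is exactly what an
    # AI classifier does when its confidence is too low.
    return "manual_review"

print(route_ticket("Refund for duplicate charge"))  # billing
print(route_ticket("Strange request"))              # manual_review
```

Every routing decision is traceable to one line of the rule table, which is the whole point: no latency, no per-call cost, no hallucinations.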

When you tell a client that instead of selling them the more expensive project, you build trust that's worth more than any single contract.

If you can write it as if/then, you don’t need a model

This is the most common scenario we see. A client wants to use AI for something that can be solved with deterministic logic.

Tax calculations. Regulatory compliance checks. Ticket routing. Product classification by inventory rules. Form validation.

The difference in numbers is clear. A rules engine responds in milliseconds at a marginal cost of zero, with 100% accuracy within the parameters you defined. You know exactly what happened in every case. An AI model takes 200 to 2,000 milliseconds, charges you $0.002 to $0.05 per API call, and typically lands between 85% and 95% accuracy. And you lose most of the explainability.
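To make the per-call difference concrete, here's the same comparison as arithmetic. The monthly volume is invented; the per-call price is taken from the range above:

```python
# Rough monthly cost of one classification step at a hypothetical volume.
requests = 100_000        # hypothetical monthly request volume
llm_per_call = 0.01       # USD, within the $0.002-$0.05 range quoted above
rules_per_call = 0.0      # deterministic code: marginal cost is ~zero

print(requests * llm_per_call)    # 1000.0 -> ~$1,000/month for the model
print(requests * rules_per_call)  # 0.0    -> ~$0/month for the rules engine
```

At low volume the gap is a rounding error; at high volume it's a budget line, which is why the arithmetic has to be run per project.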

Models are only worth the cost when you’re dealing with ambiguity — interpreting user intent, handling unstructured text, classifying inputs that don’t fit neat categories. If 95% of your cases follow clear rules, forcing AI into the mix just makes your system expensive and unpredictable.

Back to the support ticket example: if your categories cover 95%+ of volume, the remainder goes to a manual queue — which is exactly what an AI model would do when its classification confidence is too low. Same outcome, without the cost.

98% accuracy with no explanation is worse than 95% with an audit trail

A bank denies your loan. You ask why. The answer you get: “the model determined that your risk profile is elevated.”

Is that acceptable? In many jurisdictions, no. And in more of them every year.

In highly regulated sectors, accuracy alone doesn’t cut it. You need to explain your math. A model that’s right 98% of the time but works as a black box can be worse than a rules-based system that’s right 95% and produces an auditable decision chain.

The data backs this up. Gartner predicted that over 30% of generative AI projects would be abandoned after proof of concept, and one of the primary factors is the inability to meet risk controls and governance requirements.

When a client in financial services or healthcare asks us to implement AI for decision-making, the first conversation isn’t about models or data. It’s about what happens when a regulator asks them to explain a decision. If there’s no clear answer, we step back to a rules-based system — one that can show exactly why every decision was made.

Your data problem is not an AI problem

Clients often come to us wanting to predict demand with AI, but when we dig into their systems we find they have six months of history for just 40 products with recurring sales out of a catalog of 300. At those volumes, an AI model won’t find patterns — it’ll invent them. That’s not an AI problem. It’s a data problem. And AI doesn’t solve it — it amplifies it.

| Data situation | Recommended approach |
| --- | --- |
| Thousands of clean records, clear patterns | Classical ML (regression, decision trees) works well and is interpretable |
| Hundreds of records, some signal | Heuristics based on business rules + moving averages |
| Dirty or inconsistent data | Data cleaning and standardization first, AI later |
| Few records, many variables | The model will memorize noise, not learn patterns |

BCG found that only 25% of companies realize significant value from their AI initiatives — and just 5% achieve it at scale. One of the primary reasons: 63% of organizations don’t have the data management practices needed for AI.

If you don’t have enough data, a classical ML model with 10 well-chosen variables will outperform an LLM that has nothing to work with. And it will cost a fraction of the price.

When the math doesn’t work: latency and per-transaction cost

This is the easiest scenario to evaluate because it’s pure arithmetic.

Say you have an e-commerce platform processing 50,000 transactions per day. You want to use AI for real-time fraud detection. The model needs to evaluate every transaction before approving it.

The math:

  • Cost per API call to a language model: ~$0.01 (conservative)
  • 50,000 transactions/day x $0.01 = $500/day = $15,000/month
  • Average value of fraud prevented (assuming roughly 50 preventable fraudulent transactions a month at an $80 average ticket): ~$4,000/month

The model costs nearly 4x what it saves. And that’s before accounting for latency: each API call adds 200-500ms. In a checkout flow, that’s the difference between a completed purchase and an abandoned cart.

The alternative? A rules-based system that flags suspicious transactions (unusual amount, different-country IP, three failed attempts) and only sends those — say 2% — to a more sophisticated model. Cost: $300/month instead of $15,000. Latency: <10ms for 98% of transactions.
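The two monthly bills from this example can be checked in a few lines, using the same assumed figures as above:

```python
# Checking the arithmetic above: scoring every transaction with an LLM
# versus a rules prefilter that escalates only ~2% of traffic to the model.
tx_per_day = 50_000
cost_per_call = 0.01   # USD per model call (the conservative figure above)
days = 30

full_llm = tx_per_day * days * cost_per_call        # every transaction scored
hybrid = tx_per_day * days * 0.02 * cost_per_call   # only the flagged 2%

print(f"all-LLM: ${full_llm:,.0f}/mo, hybrid: ${hybrid:,.0f}/mo")
# all-LLM: $15,000/mo, hybrid: $300/mo
```

The hybrid also keeps the fast path fast: 98% of transactions never wait on an API call at all.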

Even though AI costs have dropped ~80% in 18 months, the per-transaction arithmetic still kills projects where volume is high and per-unit value is low. Run the numbers before you start.

No amount of AI fixes a broken process

At Word Magic, the translation software company my father and I founded in the 90s, we spent months trying to automate online order receiving and processing. We built a custom internal system — I told that story in another post. But the problem wasn’t the system. It was that the fulfillment process wasn’t standardized: information came from different channels in different formats, validation rules varied depending on who processed the order, and there was no single agreed-upon workflow across teams.

No technology was going to solve that. We had to sit down, define a clear and consistent process, and only then did the automation actually work.

That pattern shows up in roughly 70% of the AI consultations I do today. The client describes a classification, prediction, or analysis problem. We investigate. And we discover that the real issue is that there’s no clear process, or three teams operate with different definitions, or data isn’t collected consistently.

BCG quantifies this with their 10-20-70 rule: AI transformation success depends 10% on algorithms, 20% on data and technology, and 70% on people, processes, and organizational change. McKinsey confirms that only 39% of organizations report EBIT impact from their AI investments — largely because they don’t redesign workflows.

Automating a broken process doesn’t fix it. It breaks it faster.

We’ve written about the questions AI doesn’t ask. This is the most important one: does the process you want to automate work well without technology? If not, don’t buy a model. Fix the process.

What to use instead

| Scenario | Simpler alternative | Approximate cost | Implementation time |
| --- | --- | --- | --- |
| Process with clear, fixed rules | Rules engine (custom code or low-code tools) | $2K-$10K | 1-3 weeks |
| Prediction with limited structured data | Classical ML (regression, random forest) | $5K-$15K | 2-4 weeks |
| Connecting systems without complex logic | Workflow automation (n8n, Make, Zapier) | $1K-$5K | 1-2 weeks |
| Disorganized process with inconsistent definitions | Process redesign + documentation | $3K-$8K | 2-4 weeks |
| Anomaly detection at high volume | Rules to filter + model only for ambiguous cases | $5K-$12K | 3-5 weeks |

These alternatives aren’t exciting. They won’t generate a stunning slide for the board. But they solve the problem at a fraction of the cost, with less risk, and in less time.

When AI IS the right answer

I don’t want this post to read as anti-AI. We make our living on it. But AI wins when the problem involves ambiguous natural language, pattern recognition in unstructured data, personalization at scale, or reasoning over constantly changing contexts.

If your problem fits there, we’ve written about how to close the utilization gap and about how to deploy AI agents in real operations. That’s the other half of our work — and the part that excites us most.

But that work delivers better results when we first filter out the projects that shouldn’t have been AI in the first place.


I’d rather tell you today that you don’t need AI and earn your trust, than sell you a project that ends up in the 42% graveyard. If you’re not sure whether your next project needs AI or something simpler, that’s exactly the conversation we have in our AI maturity assessment.

Assess whether my project needs AI


Tags: AI strategy, decision making, AI ROI, enterprise automation, technology consulting, AI alternatives, AI projects

Related Articles

The 100x Employee Already Exists (And Changes How You Hire)
Business Strategy · 6 min read

One AI-literate professional now produces what used to take a team. Jensen Huang confirmed it at GTC 2026. Here's what it means for your hiring strategy.
Anthropic Uses Salesforce. Why Don't You?
Business Strategy · 7 min read

The most advanced AI companies buy SaaS instead of building it. A framework for deciding when to build and when to buy.