AI · SMB · Strategy

5 typical mistakes we see in SMB AI projects

Oscar Rovira
7 May 2026
5 min

The context

77% of AI projects in Spanish companies don't reach production with measurable ROI, according to RAND Corporation's 2024 study. Most failures aren't caused by the technology, which today is accessible and cheap, but by errors of framing, expectations and follow-up.

This list collects the five patterns we see most often when helping SMBs put AI into their business. It isn't exhaustive, but it covers the vast majority of cases where a project that looks obvious on paper ends up abandoned three months in.

1. Starting from the model instead of the problem

"We want to apply AI to the business." It's a common opening line in initial meetings, and it's the wrong starting point. The right question is: "What specific problem are we trying to solve?" Without an answer to that question (a measurable problem, with real pain, with a quantifiable current cost) any AI project drifts.

A generic AI model has never solved a business problem. A model applied to a concrete problem sometimes does. That difference is everything.

2. Underestimating the real cost of rollout

The price of an AI API is visible: a few cents per 1,000 input tokens, a few more per 1,000 output tokens. What isn't visible, and often triples the final cost, is the effort of integrating it into the existing operational flow: connecting it to the CRM, the billing software and the product database; training staff; tuning the tone over the first weeks; handling exceptions.

An SMB that budgets the project based only on the API cost discovers two months in that it needs integration work it hadn't planned for. Of the total budget, the API is 20%. The rest is integration, data and follow-up.

3. Not measuring ROI from day one

"We'll see if it works." No: you won't see anything if you don't measure. Every AI project should have, before the first commit, a concrete metric and an expected numeric value. Without that, three months in you'll be having an awkward conversation about justifying the investment.

Examples of good metrics: bookings recovered per week, no-show rate reduction in percentage points, staff hours freed per month, qualified leads per week. Bad examples: "improving customer satisfaction", "automating tasks", "optimising processes". The difference is that the good ones are numbers you can track over time; the bad ones are just words.

4. Confusing AI with self-service

An SMB rolling out an AI agent often expects it to eliminate the need for human attention. That expectation is almost always wrong. AI handles the routine 70–80% of queries well and routes the rest to humans. If the SMB doesn't have a flow to receive those escalations with context, the end customer's experience is worse than before: they spend 10 minutes with a bot before reaching a human who lacks the conversation's context.

Good design treats AI as triage, not as a substitute. It filters the routine, warms up the query, and when it escalates, it does so with full context.

5. Treating the project as a product, not as an operation

An AI project isn't a delivery that ends. It's an ongoing operation: the model drifts over time, customers ask new things, the tone changes, competitors apply new techniques. An SMB that signs a turnkey AI project and doesn't budget for maintenance is setting the project up for abandonment.

A reasonable split is 60% for the initial setup and 40% for ongoing operation over the first 12 months. After that you can scale maintenance down, but not eliminate it.

What we take from this

The mistakes aren't about AI. They're about framing, expectations and follow-up. The technology is accessible; the discipline to apply it well, less so. If you're considering an AI project for your business, it's worth spending two weeks defining the problem, the metric and the follow-up budget before touching anything technical.

If you want to discuss your case with a team that has seen all five mistakes in practice, drop us a line.

Let's talk