Choosing an AI consultancy: the question nobody asks

There are hundreds of agencies doing AI projects. Most deliver a report. The good ones deliver a decision.

By The Only Constant
People & Organization

In 1854, a cholera outbreak struck London. The established medical science was certain: it came from bad air. Miasma, they called it. A convincing story, widely accepted, completely wrong.

John Snow, a physician who preferred looking over reading, went out into the streets. He mapped where the sick people lived. The pattern pointed to a water pump on Broad Street. Snow removed the pump handle. The outbreak stopped.

The difference between Snow and his colleagues was simple. They reasoned from theory. He started with the data.

When choosing an AI consultancy, that is precisely the dividing line that matters.

The report problem

Most AI engagements at consultancies follow the same script. A team arrives. They interview ten people. They produce a report with findings, a roadmap, and a priority matrix. The report gets presented to the steering group. Everyone nods. The report goes in a drawer.

We encounter this pattern in organisation after organisation. Not because those consultancies do poor work, but because a report is not a decision. A report is a postponement of a decision, packaged as progress.

The fundamental problem: most organisations that hire an AI consultancy already know where the pain is. They need someone to help them move, not someone to confirm that there is indeed pain.

What you are actually looking for

A good AI consultancy makes itself redundant. Quickly. That sounds like a contradiction, but it is the most important selection criterion that almost nobody applies.

Three questions that reveal more than any reference call.

Where is the evidence? Ask for working examples. Not case studies in a PDF, but working systems you can actually see. A consultancy that only shows reports probably only delivers reports. A consultancy that shows prototypes builds prototypes.

How fast is the first result? If the answer is "after the eight-week analysis phase," keep looking. The value of AI experiments lies in speed. Two weeks to put something working in place. That something delivers the information needed to decide: continue, adjust, or stop. Decision data, not slideware.

What happens if it fails? This is the question that generates the most tension. A consultancy that becomes uncomfortable at the word "stop" is a consultancy that benefits from ongoing uncertainty. The good ones are perfectly happy to say: this isn't working, here's what we learned, stop this and try that.

Where the market misleads you

The AI consultancy market is growing fast, and that attracts two types of providers.

The first type sells tools. They have a partnership with a platform and their advice suspiciously often leads to that one platform. Just sales with an analysis phase in front of it.

The second type sells complexity. The more complicated the engagement, the longer the assignment. Forty-page governance frameworks. Roadmaps running to 2028. Risk analyses so extensive they become a risk themselves. That is solutioneering from the supply side: the tool sells itself, the problem comes later.

The question that helps: would this consultancy also advise me to do nothing? If the answer is no, if the recommendation is always "more", then the business model is the problem.

How to get started without getting lost

Three principles that help with any AI engagement, regardless of who runs it.

Pick a fight. The biggest mistake is starting everywhere at once. Choose one domain, one team, one process. Go deep rather than broad. Every additional scope item halves your learning capacity.

Demand working things. Presentations are not a result. Reports are not a result. A working prototype that real people use during their normal working day, that is a result. It does not need to be finished. It needs to be real. A good AI proof of concept delivers exactly that: decision data, not slideware.

Measure to learn. The reflex is to measure whether AI "works." That is the wrong question at the start. The right question: what have we learned? Which assumptions held and which did not? Measuring to learn creates safety. Measuring to judge creates politics. And only scale when the evidence is there, not when the enthusiasm is. That principle, proof before scale, protects against the most expensive mistake in AI adoption.

John Snow was initially ignored. The establishment had a theory, and that theory was comfortable. But Snow had data. And data always beats comfort, if you are willing to look.

The right AI consultancy helps you look. At your own processes, your own people, your own data. And then build. Fast, small, measurable. Until you know enough to decide for yourself.

Ready to get started? Begin with an AI Workshop to give your team the foundations. Or start an AI Automation Track to discover where AI genuinely adds value in your organisation.
