In Greek theatre, there was a reliable trick: when the plot got too complicated, a god was lowered from the ceiling to resolve the problem. The audience loved it; reality simplified instantly. This is how many organisations look at AI: as a higher power that will figure it out. The difference with theatre is that there, at least, the problem actually got solved.
AI is not a god. It is merely an extraordinarily capable tool. And like any tool, it only works when you know what you want it to do and can judge whether it did it well. That requires two things most organisations skip: understanding your own processes and understanding the technology.
Know your process before you automate it
There is a persistent instinct to aim AI at the things we understand the least: the black boxes. The departments where nobody can really explain what happens between input and output. The logic sounds reasonable: we will know a great result when we see one, so let AI figure out how to get there.
This is backwards. AI works best on processes you already understand well, because you need to evaluate the output. You need to know when it is wrong. You need to define what "better" means, and when and why to escalate. If you cannot do that today without AI, adding AI will just make the confusion faster and more expensive.
Before greenlighting any AI initiative, there is one question that matters: can you describe, in plain language, what this process does today, where it breaks, and what success looks like? If not, that is your first job. Walk over and spend twenty minutes watching someone do the actual work. It will tell you more than a month of steering committees.
Know the technology before you design with it
Understanding processes is only half the equation. The other half is knowing what AI actually can and cannot do. What it is good at, where it struggles, and what kinds of problems it solves. That understanding is what lets you design solutions. It is what makes strategising possible, because strategy requires choosing between options you understand.
Understanding the core capabilities of AI builds enough intuition to ask the right questions. When someone proposes an AI solution, can you judge whether the problem is a good fit? When your team presents a prototype, can you see what is missing? That intuition comes from working with AI, even briefly, far more than from reading about it.
Without both, you are not deciding, you are guessing
Without process understanding, you do not know what to automate. Without technology understanding, you do not know how. From that comes the risk of solutioneering: starting with the tool instead of the problem. We have done that for decades. It rarely ends well. So before you approve the next AI initiative, ask two things. Can we explain what this process does and where it breaks? And do we understand the technology well enough to judge whether this solution fits?
If you do not have answers to both, that is not a reason to delay.
Go find out.
It usually takes less time than you think.