In 1519, Hernán Cortés set sail for Mexico with eleven ships. After arriving, he burned them. His men had no option to turn back. They had to move forward.
The story is often told as a lesson in commitment. But there is a subtler truth in it. Cortés knew that endless preparation for an unknown coastline would not help him. The only way to learn the terrain was to go there.
AI readiness works the same way. You only learn the terrain by entering it.
The assessment circus
An entire industry has sprung up around AI readiness. Thirty-page questionnaires. Maturity models with five levels. Scorecards that give your organisation a number. Consultants who need six weeks to conclude that you are "not yet ready."
The result is predictable: a report saying your data is not in order, your culture is not ready for change, and your IT infrastructure needs adjustments. All true. All useless as an excuse not to start.
Because here is the uncomfortable fact: no organisation is "AI-ready" before it begins. Readiness is not a prerequisite for action. Readiness is the result of action.
What actually matters
The question is how to get started without getting lost. Three things genuinely matter; a hundred do not.
A sponsor who dares to say no. To the CEO, to vendors, to the team's enthusiasm when things are heading the wrong way. The budget and authority to stop are more important than the budget to start.
A defined playing field. Choose a domain. One team. One process. The biggest mistake is starting everywhere at once. Every additional scope item halves your learning capacity. "Customer service" is too broad. "First-line email triage for product complaints" is exactly right. It helps to begin by observing where the friction is rather than brainstorming possibilities in a meeting room.
The willingness to learn. Measure to learn, not to judge. That sounds like a platitude, but the difference is enormous. Measuring to judge creates politics: people start managing results instead of reporting honestly. Measuring to learn creates safety: the team feels comfortable saying when something is not working.
Ninety days
An AI experiment does not need to take long. Ninety days is enough to get from strategic question to decision data. Roughly divided: one week to choose focus, two weeks to discover where the friction is, two weeks to design a solution, and two to four weeks to build and test it.
At the end of those ninety days you know more than any assessment could tell you. You know whether your team can handle change. You know whether your data is usable. You know whether AI genuinely adds value to this specific piece of work. And you know it based on evidence, not a questionnaire. An additional signal: look at where employees are already using AI on their own initiative. Shadow AI is a free diagnosis of where the friction is greatest.
The human conversation
What most readiness assessments skip is the human side. What does the team actually think about it?
The most important fifteen minutes of any AI engagement are the fifteen minutes when you ask the team three questions. What excites you about AI? What worries you? What changes about your work if AI takes over thirty percent of routine tasks?
The answers are always revealing. The anxiety is rarely where you expect it. Neither is the enthusiasm. And the willingness to cooperate is directly tied to feeling heard.
Middle management is the key. If team leaders are sceptical, you are not yet ready to start. Not because the technology does not work, but because the people who need to carry it are not behind it. Solve that first.
Burning Cortés' ships was not recklessness. It was the recognition that preparation for the unknown has a point of diminishing returns. At some point you have to go ashore.
That point is now. Choose a small stretch of coastline. Go ashore. Learn the terrain by standing in it.