"We need to do something with agents." It is the phrase of 2026. At every conference, in every boardroom, at every vendor demo. AI agents that independently make decisions, rewrite processes, transform organisations. The promise is enormous. The reality is more complicated.
Let us start with what an AI agent actually is. An agent is software that autonomously executes tasks based on a goal. Not a chatbot that answers questions. Not an automation that follows a fixed script. An agent assesses a situation, chooses an approach, executes actions, and adjusts its strategy based on the result.
That is powerful. It is also risky. And the distinction between the two comes down to one word: boundaries.
The autonomy illusion
The market sells full autonomy. AI that independently runs your customer service. AI that takes over your entire quoting process. AI that makes decisions without human involvement.
The problem: most organisations are not ready for that. Neither technically nor organisationally. Who is responsible when the agent makes a wrong decision? Who checks whether the agent is still doing what it should? How do you explain to your regulator what happened?
The promise of full autonomy is tempting. The reality is that most organisations benefit from something in between.
Agentish AI
We call it agentish AI. Intelligent systems that act autonomously within clear, human-designed boundaries. Smart enough to work independently. Transparent enough to be accountable.
Three layers in every solution:
Functionality. What does the system do? An agent that reads incoming complaints, classifies them by urgency, and drafts a first response. Clearly scoped.
Guardrails. Where does the system stop? Complaints above a certain severity are always escalated to a human. Financial decisions above a threshold require approval. The system may advise, but never act autonomously on high-impact matters. How you build that governance in without creating bureaucracy is critical.
Explainability. Why did the system do what it did? Every action is traceable. Every decision can be explained: to the customer, to management, to the regulator.
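The three layers can be sketched in a few lines of code. Everything here is illustrative: the severity scoring, the escalation threshold, and the names are assumptions for the sketch, not a real framework. In production the classifier would be a model, not a keyword lookup; the point is where the guardrail and the audit trail sit in the design.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Guardrail threshold: an assumption for this sketch, tuned per organisation.
SEVERITY_ESCALATE = 4

@dataclass
class Decision:
    """Explainability layer: every action is logged with its reason."""
    action: str
    reason: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def classify_severity(complaint: str) -> int:
    """Functionality layer: toy severity score (1-5).
    A real agent would use a model here."""
    keywords = {"refund": 3, "broken": 3, "urgent": 4, "legal": 5}
    return max(
        (score for word, score in keywords.items() if word in complaint.lower()),
        default=1,
    )

def handle_complaint(complaint: str, audit_log: list[Decision]) -> str:
    severity = classify_severity(complaint)
    if severity >= SEVERITY_ESCALATE:
        # Guardrail layer: high-impact cases always go to a human.
        audit_log.append(
            Decision("escalate_to_human", f"severity {severity} >= {SEVERITY_ESCALATE}")
        )
        return "escalated"
    # Functionality layer: draft a first response for human review.
    audit_log.append(Decision("draft_response", f"severity {severity} below threshold"))
    return "drafted"
```

Note the design choice: the guardrail is a hard branch in the code, not a prompt instruction the model may or may not follow, and the audit log is written on every path, so the "why" is never reconstructed after the fact.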
That is less exciting than full autonomy. But it works.
Where agents add value
Three patterns where agentish AI has the most impact.
Triage and routing. A stream of incoming information (complaints, requests, orders) that needs to be assessed and routed. The agent reads, classifies, and directs. The human handles. The volume a human can process multiplies tenfold.
Monitoring and alerting. An agent that continuously watches: quality of output, deviations in patterns, costs that are climbing. Not reacting to problems, noticing them before they become problems. That is also how you detect drift in existing AI systems.
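A monitoring agent of this kind can be surprisingly small. The sketch below is one illustrative approach, a rolling baseline with a z-score alert; the window size and threshold are assumptions, and a real system would monitor several metrics at once.

```python
from collections import deque
from statistics import mean, stdev

class DriftMonitor:
    """Watches a stream of metric values (error rate, cost, latency)
    and alerts when a new value drifts from the established baseline.
    Window size and z-score threshold are illustrative assumptions."""

    def __init__(self, baseline_size: int = 50, threshold: float = 3.0):
        self.baseline = deque(maxlen=baseline_size)  # rolling baseline window
        self.threshold = threshold                   # z-score alert threshold

    def observe(self, value: float) -> bool:
        """Feed one new measurement; returns True when it warrants an alert."""
        if len(self.baseline) >= 10:  # wait for a minimal baseline first
            mu, sigma = mean(self.baseline), stdev(self.baseline)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                # Alert, and do not absorb the outlier into the baseline.
                return True
        self.baseline.append(value)
        return False
```

The design choice that matters: an outlier that triggers an alert is kept out of the baseline, so the agent notices a creeping problem instead of slowly redefining it as normal.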
First drafts. An agent that creates an initial version (a quote, a report, an analysis) based on input, which a human then reviews and completes. The human shifts from maker to reviewer. That requires the human to genuinely master the craft, which is precisely the Junior Gap risk.
How to start
Not with "we need to do something with agents." Instead: which process has the highest volume, the most repetition, and the clearest rules? That is where the agent opportunity lives.
Build the smallest working proof. Two weeks. Real data, real users. Measure whether the system does what it should. Threaten to switch it off. Do people complain? Then you have something.
And build the three layers in from day one. Not as an afterthought, not as a compliance exercise bolted on at the end. The guardrails and the explainability are part of the design, just like the functionality.
The market is shouting "agents." The organisations that benefit most are whispering "agentish." They build intelligently, within boundaries, with human oversight. Less spectacular. More results.