Insights from Hakkoda’s latest research reveal a widening gap between agentic AI ambition and enterprise readiness.
For the past several years, enterprise AI has been framed as a question of potential. What could AI automate? What decisions might it improve? And more recently, what happens when AI systems begin to act with greater autonomy?
That phase is ending.
Enterprise AI has moved decisively from experimentation into production. Most organizations are no longer testing whether AI works. They’re confronting a more uncomfortable reality: scaling AI is proving far harder than deploying it. And as agentic AI enters the picture, that execution gap is becoming impossible to ignore.
Recent research from IBM company Hakkoda underscores just how uneven this transition has been. While AI is now embedded in day-to-day operations at many organizations, fewer than half (42%) say they are orchestrating AI across functions, and only 16% report having operationalized AI enterprise-wide. True enterprise coordination remains rare: more than 40% of organizations are still limited to function- or unit-level deployments.
This disparity matters because agentic AI changes the nature of the challenge. Autonomous and semi-autonomous systems don’t thrive in silos. They depend on interoperability, shared context, clear governance, and the ability to coordinate action across systems, teams, and data sources. In other words, agentic AI doesn’t just demand intelligence. It demands enterprise-level design.
On the surface, enthusiasm for AI appears overwhelming; nearly 90% of executives report deploying AI to drive process transformation. Yet when asked about outcomes, only 19% say AI has delivered end-to-end workflow change. The gap between intent and impact is striking, and it reflects a broader pattern: many organizations are using AI to optimize within existing structures rather than rethinking how work actually flows across the enterprise.
The same pattern is visible at the business model level. While a majority of executives say AI is driving at least incremental changes to how their businesses operate, only a small minority report a full reimagination of their business models. Most remain somewhere in between, experimenting with localized improvements while stopping short of structural change.
Agentic AI puts pressure on that middle ground. Systems designed to sense, decide, and act across boundaries cannot simply be layered onto fragmented processes. They require enterprises to move from tool-centric thinking to systems thinking, where coordination and orchestration matter more than isolated optimization.
The data also challenges the idea that enterprise AI hasn’t delivered value. A strong majority of executives report that AI met or exceeded revenue expectations over the past year, and more than 80% say it contributed to operating margin improvements (averaging an impressive 8%). In some industries, such as financial services, organizations report significant reductions in revenue leakage attributable to AI.
At the same time, failure remains a material part of the story. Nearly one third of AI initiatives were cancelled, postponed, or unable to scale in the past year, and the reasons are telling. Inadequate skills account for a meaningful share, but security, ethical concerns, and weak governance feature prominently as well. In fact, one in four unsuccessful projects is attributed directly to governance shortcomings.
AI success is no longer binary. The question is no longer whether AI can deliver value, but whether organizations can realize that value consistently, without introducing unacceptable risk, cost, or fragility. Agentic AI raises the stakes by increasing both the upside and the consequences of poor execution.
Perhaps the most counterintuitive insight from the research is the role governance plays in enabling scale. Organizations with more mature AI governance report measurable improvements across multiple dimensions: better security outcomes, higher AI adoption, and significant efficiency gains directly attributed to governance practices.
This runs counter to the long-standing belief that governance slows innovation. In practice, clear accountability, explainability, and guardrails appear to increase confidence, reduce rework, and accelerate time to value. As AI systems become more autonomous and interconnected, governance shifts from a compliance exercise to a coordination mechanism, helping enterprises move faster without losing control.
This is especially relevant as executives cite growing concern over risks associated with AI, including hallucination, misuse, security, and lack of transparency. Agentic systems amplify these risks precisely because they operate across boundaries and act with greater independence. Mature governance doesn’t eliminate uncertainty, but it creates the conditions under which autonomy can scale responsibly.
Agentic AI is also changing how enterprises think about infrastructure. Nearly half of executives say they are turning to cloud platforms to handle the variable and often unpredictable computing demands AI introduces, while a similar share are reassessing their infrastructure at an enterprise level to understand where readiness gaps exist.
What’s driving this shift is a growing recognition that agentic systems place different demands on the enterprise. They require architectures that are open by design, secure by default, and flexible enough to support a mix of compute-intensive models and lighter, task-specific agents without disrupting existing systems.
Hybrid architectures are emerging as a pragmatic response. They allow organizations to balance scalability with regulatory control, and internal optimization with ecosystem connectivity. This balance becomes critical as agentic systems interact across organizational and external boundaries.
Yet here again, confidence may be outpacing reality. While most leaders believe their data and governance frameworks are future-ready, fewer than one-third of organizations have actually implemented the interoperability and scalability features agentic AI demands. Among the most mature organizations, seamless model integration and scalable infrastructure are common. Among the least mature, they are not.
The result is a widening maturity gap that agentic AI will only accelerate.
The next phase of enterprise AI will not be defined by novelty or isolated wins. It will be defined by execution. Agentic AI is already reshaping expectations around autonomy, coordination, and speed. But it’s also exposing the limits of fragmented operating models and under-designed systems.
The organizations that succeed will be those that treat AI as an enterprise capability rather than a collection of tools. They will invest not only in models, but in governance, architecture, and operating discipline. They will measure success in terms of sustained outcomes, not pilot performance. And they will design for coordination from the outset, knowing that intelligence alone does not scale.
The question enterprises must now ask is no longer where AI can be applied, but how it can be operationalized safely, repeatedly, and at scale. Agentic AI doesn’t just change what systems can do. It changes what organizations must become to use them well.
Leigh Bryant
Editorial Director, Composable.com
Leigh Bryant is a seasoned content and brand strategist with over a decade of experience in digital storytelling. Starting in retail before shifting to the technology space, she has spent the past ten years crafting compelling narratives as a writer, editor, and strategist.