McKinsey's 2025 State of AI report puts the number plainly: only 26% of organisations globally generate tangible value from AI. That figure has held flat for two consecutive years, despite accelerating investment across every sector and market. Budgets are growing. Tools are improving. The share of organisations capturing value is not moving.
The gap is not closing because the organisations in the 74% keep repeating the same error. They treat AI as a technology procurement decision. They select tools, deploy them, measure adoption, and then find that the results stay with the individuals who ran the deployment rather than compounding across the institution. The investment was real. The returns are not.
This is a structural condition, not a motivational one. The 74% are not failing because they lack ambition or resources. They are failing because they are not making the four operating model decisions that separate AI deployment from AI value.
The organisations generating tangible AI value share four structural conditions. These are operating model decisions. They precede and determine the outcome of every technology decision.
A specific AI strategic thesis. The 26% have made explicit what the organisation will use AI to achieve and, equally important, what it will not attempt. The strategic thesis names the business problem, specifies the value lever and sets the scope. Without it, every AI proposal looks equally valid and the organisation invests in everything while committing to nothing.
Named governance ownership that predates deployment. In the organisations generating value, a person with named authority over AI decisions was in place before the first significant deployment. Governance was designed before it was needed, not in response to a problem. This changes the quality of every deployment decision that follows: there is a framework to apply and a person accountable for applying it.
Workflows that were redesigned rather than augmented. The 26% did not insert AI into existing processes. They asked what the process should look like given what AI enables, then redesigned it. This is a fundamentally different exercise. Augmentation produces faster versions of old workflows. Redesign produces workflows that create value the old ones could not.
Measurement baselines established before deployment. Value requires a before. Organisations in the 26% captured their operational baselines before the AI went live, against the specific dimensions the deployment was intended to improve. This makes the return case verifiable rather than impressionistic.
Before the next AI investment decision, four questions determine which side of the divide the investment is likely to land on.
Does the organisation have a named AI governance owner, one person with authority to make decisions?

Have any workflows been redesigned for AI rather than augmented by it?

Is a measurement baseline in place for the workflow the investment will affect?

Does the AI strategy specify what the organisation will and will not attempt?
If the answer to two or more of these questions is no, the investment sits firmly in the 74%. The conditions are not difficult to build. They are a design problem, not a capability problem. The organisations that build them first find that the technology decisions that follow become easier to make and far more likely to generate the return the investment was intended to produce.