Multiple research sources in 2025 converge on a striking contrast: 88% of organisations report regular AI use in at least one business function, yet only 1% consider their AI strategy mature. This is not a contradiction. It reflects the distance between deploying tools, which is now ubiquitous, and having the governance, operating model design and measurement infrastructure that converts deployment into compounding institutional capability.
The 87% in the middle have AI use and something that resembles an AI strategy. What they lack are the structural conditions that make the strategy operational: decisions made through a governance framework, workflows redesigned rather than augmented, capability measured against pre-deployment baselines, and the capacity to operate and extend AI independently of the original programme team.
The gap is not about technology. The 88% have access to the same tools as the 1%. The gap is about what was built around and beneath the tools.
A mature AI strategy is not a document. It is a set of structural conditions operating in practice.
Governance is operational: decisions about AI workflows, data handling and risk escalation are made through a defined framework with a named owner, not through ad hoc judgment. This governance framework existed before the first significant deployment and has been maintained since.
Workflows are redesigned: the organisation has at least one AI-enabled process that was redesigned from first principles rather than augmented. The redesign produced a different workflow, not a faster version of the old one. The redesigned workflow is documented, transferable and operates without the original design team.
Capability is measured: the organisation can demonstrate AI value against pre-deployment baselines rather than against impressions or adoption rates. There is a measurement framework with defined metrics, documented baselines and a reporting cadence.
The organisation is self-sufficient: a new executive or team member can be brought into the AI operating model without requiring re-education from scratch. The knowledge is in the documentation, the frameworks and the playbooks. Not in the heads of the people who built the programme.
Three questions test strategic maturity without requiring a formal assessment.
If the AI lead left tomorrow, would the governance framework continue to function without them? If a new workflow needed an AI component, does the organisation have a defined process for assessing, approving and deploying it? If the board asked for evidence of AI value, could the organisation produce a measurement comparison against a pre-deployment baseline?
One "no" indicates a specific gap with a specific remedy. Three indicate that the organisation is in the 88%: active AI use without the operating model infrastructure that would make that use compound. The infrastructure is buildable. The 1% built it deliberately, before deployment and alongside it. The 88% have an identifiable path from where they are to where the 1% sit. It runs through operating model design, not technology investment.