The common approach to AI adoption follows a consistent sequence: identify a use case, select a tool, deploy it, measure adoption. It feels rational. It reliably produces adoption metrics and rarely produces business value.
Research by Davenport and Westerman into the structural characteristics of AI leaders and laggards surfaces a consistent finding: the differentiator is not algorithm quality, model capability or tool sophistication. It is operating model integration: what sits beneath the tool. The organisations generating value built the conditions for value before they selected anything. The organisations that did not are generating activity.
Tool selection is a consequential decision. But it is consequential as step five, not step one. The four decisions that precede it determine whether the tool, once deployed, creates institutional capability or individual results that disappear when the individual moves on.
Four layers sit beneath any AI tool and determine whether it generates compounding institutional value or individual results.
Strategic intent. What is the organisation actually trying to change? A specific, documented answer to this question governs every tool and workflow decision that follows. Without it, individual deployments optimise for local convenience rather than strategic outcome, and the organisation accumulates tools that do not add up to a capability.
Governance. Who owns the decisions that govern how AI is used? Who classifies risk, approves workflows, sets data handling rules and reviews the framework as deployments evolve? Governance that exists before a problem arises governs effectively. Governance assembled in response to a failure governs reactively and incompletely.
Workflow design. Has the surrounding process been redesigned for what AI produces? An AI tool inserted into an unredesigned workflow produces a faster version of the same process. A workflow designed around what AI enables produces outcomes that were previously impossible. The design decision happens before deployment and determines what the tool is capable of creating.
Measurement. Was a baseline captured before the tool went live? Value requires a before. Organisations that establish operational baselines before deployment can demonstrate what changed and by how much. Organisations that measure after the fact are measuring against memory, which is an unreliable comparator and an unconvincing evidence base for the board.
For any AI tool currently deployed in your organisation, the diagnostic is straightforward: is there a documented statement of what it is meant to change? Is there a documented workflow that governs how it is used? Is there a named owner for the decisions it informs? Is there a baseline that predates its deployment?
If these layers are absent, the tool is producing individual capability. When the individual changes roles or leaves, the capability leaves with them. The tool remains. The value does not.
Building the missing layers is a design exercise, not a technology exercise. The tools already deployed are a starting point. The operating model built beneath them is what converts deployment into institutional capability that compounds independently of any individual.