MIT research into organisational AI adoption finds a consistent pattern: the same AI tool, deployed simultaneously in multiple functions of the same organisation, produces materially different outcomes across those functions. The difference is not in the tool. It is not in the quality of the people using it. The explanatory variable, consistently, is the quality of the process design surrounding the AI component. This within-organisation variance is underappreciated. Most AI assessments compare organisations against each other. The more instructive comparison is within the same organisation, where the technology is identical and the process design is not.
Functions where AI produces strong results share a set of design characteristics: the workflow was redesigned to accommodate what AI produces, the human checkpoint is specific enough to hold under pressure, and the output quality bar is documented so practitioners know what passing looks like. Functions where AI underperforms typically inserted the tool into an existing process, left the human checkpoint as an implicit review, and never documented a quality standard. The tool is identical. The surrounding design determines whether it creates value.
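To make "explicit design" concrete, here is a minimal sketch of a schema that could capture the three elements named above. The class names, fields and example values are illustrative assumptions for this article, not an instrument from the research.

```python
from dataclasses import dataclass

@dataclass
class Checkpoint:
    """A human checkpoint specific enough to hold under pressure."""
    stage: str            # where in the workflow the review happens
    reviewer_role: str    # who is accountable for the check
    criteria: list[str]   # concrete pass/fail questions, not "looks fine"

@dataclass
class AIProcessDesign:
    """Explicit process design around an AI component (illustrative schema)."""
    function: str                  # e.g. "contract_review"
    workflow_steps: list[str]      # the redesigned end-to-end workflow
    checkpoints: list[Checkpoint]  # where and how humans verify AI output
    quality_bar: list[str]         # documented standard: what passing looks like

# A high-performing function's design, made explicit (values are invented):
contract_review = AIProcessDesign(
    function="contract_review",
    workflow_steps=["draft with AI", "clause-level human check", "final sign-off"],
    checkpoints=[
        Checkpoint(
            stage="clause-level human check",
            reviewer_role="senior associate",
            criteria=["every liability clause traced to source",
                      "no citations that cannot be verified"],
        )
    ],
    quality_bar=["zero unverified citations",
                 "all deviations from the template flagged"],
)

# An underperforming function, by contrast, would typically have an empty
# checkpoints list and no quality_bar: the tool bolted onto an existing process.
```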
This means the organisation already has evidence of what effective AI process design looks like: it exists in the high-performing function, which serves as a living blueprint. The gap between the high-performing and underperforming functions is a design gap, not a technology gap. If the organisation can make that design explicit and transfer it across functions, the variance collapses.
If AI is deployed across more than one function, measure the variance in outcomes between them. The function with the strongest results has, implicitly or explicitly, built better process design around the tool. Make that design explicit: document the workflow, the checkpoint criteria and the quality standard. Then assess whether it can be transferred to the underperforming functions.
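The measurement step can be simple. Here is a minimal sketch, assuming each function reports a comparable outcome metric; the metric used (a hypothetical acceptance-without-rework rate), the function names and the figures are all placeholders.

```python
from statistics import mean, pstdev

# Hypothetical outcome metric per function: share of AI-assisted outputs
# accepted without rework, sampled over four review periods.
outcomes = {
    "customer_support": [0.91, 0.88, 0.93, 0.90],
    "contract_review":  [0.62, 0.55, 0.68, 0.59],
    "marketing":        [0.84, 0.80, 0.86, 0.83],
}

per_function = {name: mean(values) for name, values in outcomes.items()}
spread = pstdev(per_function.values())            # cross-function spread
blueprint = max(per_function, key=per_function.get)  # strongest function

print(f"per-function means: {per_function}")
print(f"cross-function spread (std dev): {spread:.3f}")
print(f"candidate blueprint: {blueprint}")
```

A large spread with an identical tool is the signal described above: the surrounding design, not the technology, is doing the work, and the strongest function is the candidate blueprint to document and transfer.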
The within-organisation variance is the organisation's own evidence base for what process design quality is worth. It is more persuasive than any external benchmark, and it points directly to the specific design elements that need to be built in the functions where AI is underperforming.