Usage data is not evidence of AI value

The most common AI measurement framework in mid-market organisations tracks how many people are using AI tools, how often they use them, and which teams have the highest adoption rates. These numbers are easy to collect, easy to present and almost entirely disconnected from the value AI is supposed to deliver.

Adoption metrics tell you that people have opened the tool. They tell you that usage has spread. They tell you that licences are being consumed. They tell you almost nothing about whether AI capability is building in the organisation or whether that capability will outlast the current cohort of users.

The fundamental problem is that adoption is a leading indicator of adoption, not a leading indicator of value. An organisation can achieve 80% adoption of an AI writing tool and produce work that is uniformly mediocre. Another organisation can achieve 20% adoption in a single workflow and generate material, measurable improvement in that process. The first organisation looks better on a dashboard. The second is building something that compounds.

When leadership asks "are we getting value from AI?" and the answer is a chart showing weekly active users, the measurement system has substituted a proxy for the question. The proxy is comfortable. It is also misleading.

The metrics that reveal whether AI capability compounds

Capability metrics answer a different question: is the organisation becoming more capable because of AI, in ways that persist and build? This requires measuring outcomes rather than behaviours, and institutional results rather than individual activity.

The shift in measurement logic is from "how many people are using AI" to "what is the organisation able to do now that it could not do six months ago?" That question is harder to answer with a dashboard. It requires decisions about what to measure before deployment, not after.

Capability metrics typically cover four domains (a code sketch of how such a metric might be recorded follows the list):

  • Process performance: is the workflow that AI sits inside performing better? Faster cycle times, fewer errors, reduced rework.
  • Decision quality: are decisions made with AI assistance better calibrated? This requires knowing what good decisions look like before AI is involved.
  • Institutional knowledge: is the organisation capturing what AI-augmented work produces, or does it disappear when individuals leave?
  • Compounding rate: are gains from AI use building on each other, or are they flat over time?
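To make the first and last of these domains concrete, here is a minimal sketch, in Python, of how a capability metric might be recorded and how a compounding check could work. Everything here, the field names, the figures and the workflow itself, is an illustrative assumption rather than a standard.

```python
from dataclasses import dataclass, field

@dataclass
class CapabilityMetric:
    """One capability metric: a pre-deployment baseline paired with
    periodic readings taken after AI enters the workflow.

    All names and figures here are illustrative assumptions.
    """
    name: str                     # e.g. "contract review cycle time"
    unit: str                     # e.g. "hours"
    baseline: float               # recorded BEFORE deployment
    readings: list[float] = field(default_factory=list)
    lower_is_better: bool = True

    def improvement(self) -> float:
        """Relative improvement of the latest reading over baseline."""
        delta = self.baseline - self.readings[-1]
        if not self.lower_is_better:
            delta = -delta
        return delta / self.baseline

    def is_compounding(self) -> bool:
        """True if each period's gain exceeds the previous period's,
        i.e. gains are building on each other rather than flat."""
        gains = [a - b for a, b in zip(self.readings, self.readings[1:])]
        if not self.lower_is_better:
            gains = [-g for g in gains]
        return all(later > earlier
                   for earlier, later in zip(gains, gains[1:]))

# A hypothetical process-performance metric, read monthly.
cycle_time = CapabilityMetric(
    name="contract review cycle time", unit="hours",
    baseline=18.0, readings=[16.5, 14.0, 10.0],
)
print(f"{cycle_time.improvement():.0%} better than baseline")  # 44% better
print(f"compounding: {cycle_time.is_compounding()}")           # True
```

The design point is that value is always expressed as a delta against a recorded baseline, which is exactly the data an adoption dashboard never captures.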

None of these metrics are simple to collect. All of them require baseline data from before deployment. That is the structural reason most organisations default to adoption metrics: they are available without any prior investment in measurement design.

Organisations that measure capability rather than adoption make the measurement decision before the deployment decision. They establish what good looks like, record the current state, and commit to a measurement period before AI enters the workflow.
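As a sketch of what those three commitments might look like when written down, assuming a Python-based reporting stack; the workflow, figures and dates are all hypothetical:

```python
from datetime import date

# A hypothetical pre-deployment measurement plan, committed to before
# the AI tool enters the workflow. All values are illustrative.
measurement_plan = {
    "workflow": "inbound support triage",
    "metric": "median time to first response (minutes)",
    "current_state": 42.0,            # the baseline, recorded now
    "what_good_looks_like": 25.0,     # the agreed improvement target
    "window": (date(2025, 1, 6), date(2025, 4, 6)),  # measurement period
}
```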

Build your measurement framework before the next deployment

The next time a team proposes an AI deployment, ask one question before approving it: what will we measure to know whether this worked, and what is the baseline we are measuring against?

If the answer involves user counts and adoption rates, the proposal is measuring the wrong thing. If the answer involves specific process outcomes and a plan to capture baseline data before deployment begins, the proposal is ready.

For organisations that have already deployed AI without this framework, a measurement reset is harder but still possible:

  • Identify two or three AI deployments that are significant enough to measure properly.
  • Establish current performance in those workflows, even if you cannot reconstruct a true pre-AI baseline.
  • Define what improvement looks like and commit to a measurement window of three to six months.
  • Report on outcomes alongside adoption data, so leadership can see both; a sketch of such a report follows this list.
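To ground that last step, here is a minimal sketch of an end-of-window report that puts an outcome metric and an adoption figure side by side. The function name, metric and figures are illustrative assumptions, and the sketch assumes lower values are better.

```python
def outcome_report(metric_name: str, start: float, end: float,
                   target: float, weekly_active_users: int) -> str:
    """Report an outcome metric alongside adoption, so leadership
    sees both. Assumes lower values are better (e.g. hours of cycle
    time); flip the comparisons for metrics where higher is better.
    """
    improvement = (start - end) / start
    met_target = end <= target
    return (
        f"{metric_name}: {start:.1f} -> {end:.1f} "
        f"({improvement:+.0%}, target {'met' if met_target else 'not met'}); "
        f"adoption: {weekly_active_users} weekly active users"
    )

# Hypothetical figures for one of the two or three chosen deployments.
print(outcome_report("invoice processing time (hours)",
                     start=6.0, end=4.2, target=4.5,
                     weekly_active_users=37))
# -> invoice processing time (hours): 6.0 -> 4.2 (+30%, target met);
#    adoption: 37 weekly active users
```

The deliberate choice is that adoption still appears in the report, but only after the outcome it is supposed to explain.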

Adoption metrics are not useless. They confirm that tools are being accessed, which matters for licence management and training investment. The error is treating them as evidence of value rather than evidence of access.

Value is measured in what the organisation can now do. Adoption is measured in what individuals have tried. The organisations that compound AI capability know the difference.