The failure is in the process, not the model

MIT Sloan research by Michael Schrage identifies a consistent pattern in AI project failure: the AI component functions correctly. The organisation rejects it. The rejection manifests as workarounds, non-adoption, quiet discontinuation or a return to previous methods after a few months. The post-mortems invariably focus on the technology. The cause is elsewhere.

The cause is that the surrounding process was never redesigned to accommodate what the AI produces. The tool was built and deployed. The workflow it was supposed to fit into remained unchanged. The organisation's existing structures (its accountability arrangements, handoff conventions, review processes and informal decision habits) treated the AI output as a foreign element and rejected it. The immune system engaged.

This is not a failure of enthusiasm or adoption. It is a design failure. The AI component was specified in detail. The process context that needed to change around it was left as an assumption.

What the immune system is actually defending

Organisations develop implicit operating systems through years of practice: the accumulated habits, role definitions, accountability structures and informal processes that govern how work gets done. These are not irrational. They evolved to solve real coordination problems. When AI is inserted into a workflow without redesigning that operating system, the existing system defends itself.

The defence takes predictable forms. AI produces outputs that do not fit the existing handoffs: they are the wrong format, the wrong level of detail or delivered at the wrong point in the process. Reviewers do not know how to evaluate AI-generated content because the review criteria were designed for human-generated content. Accountability structures do not accommodate AI-informed decisions because decision rights were assigned before AI was a variable. The path of least resistance is to revert to what worked before.

The immune system is not resistant to AI. It is resistant to disruption of the workflow it governs. Workflow redesign that explicitly accommodates what AI produces converts the immune system from a source of rejection into a source of stability: the same structures that previously defended the old workflow now sustain the new one.

Design for the surrounding process, not just the AI component

For each AI deployment under consideration, map the workflow it will sit inside before any technology decision is made. Identify every handoff point, every accountability structure and every decision that requires human involvement.

The redesign question is specific: given what AI will produce at each step, what needs to change in the surrounding process for that output to be used rather than ignored? Which handoffs need to be redefined? Which review criteria need to be updated? Which accountability assignments need to change?
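The audit these questions describe can be made concrete. The sketch below, a minimal and purely illustrative model (the `Step` fields and the `redesign_gaps` function are assumptions, not a reference to any existing tool), walks a workflow and flags the handoffs, review criteria and accountability assignments that would need explicit decisions before an AI component is dropped into one step.

```python
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    consumes: str          # format this step expects from the previous one
    produces: str          # format this step hands off downstream
    review_criteria: str   # what reviewers check at this step
    owner: str             # role accountable for decisions here

def redesign_gaps(steps: list[Step], ai_step: str, ai_output: str) -> list[str]:
    """Flag process elements that must change if `ai_step` is performed by AI
    producing `ai_output`, before any tooling decision is made."""
    gaps = []
    for i, step in enumerate(steps):
        if step.name != ai_step:
            continue
        # Handoff check: does the next step consume what the AI produces?
        if i + 1 < len(steps) and steps[i + 1].consumes != ai_output:
            gaps.append(f"redefine handoff: '{steps[i + 1].name}' consumes "
                        f"'{steps[i + 1].consumes}', AI produces '{ai_output}'")
        # Review criteria were written for human output; they need an explicit update.
        gaps.append(f"update review criteria at '{step.name}': {step.review_criteria}")
        # Decision rights were assigned before AI was a variable.
        gaps.append(f"reassign or confirm accountability: '{step.owner}' at '{step.name}'")
    return gaps
```

Run against even a two-step workflow, the check surfaces the mismatches the surrounding text describes: a downstream step expecting a format the AI does not produce, review criteria written for human work, and an accountability assignment made before AI entered the picture.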

The answers to these questions constitute the implementation work. The AI component is often the smallest part of it. Organisations that treat workflow redesign as the primary design task, and tool selection as a component within that redesign, consistently find that their AI deployments are adopted, sustained and extended. Organisations that treat the AI component as the primary design task and the workflow as a secondary concern consistently trigger the immune system response.