Change Management

AI isn't failing your business. Your change management is.

Liam·February 2026·5 min read

Some AI pilots succeed. Most AI programmes don't. The gap between them is rarely in the technology; it is in whether the people who need to use it actually do.

Key takeaway

Most AI programmes don't fail at the technology layer — they fail at adoption. Change management isn't the soft bit at the end of a project; it's the most important discipline from day one.

There is a pattern that shows up in organisations that have invested seriously in AI and are not seeing the results they expected. The technology works. The model is accurate, the integration is live, the pilot delivered some results (and left some open questions). And yet, three months later, adoption is lower than anyone projected, the team has quietly drifted back to their old ways, and the business case is looking increasingly optimistic. The CEO, CFO, or COO starts asking difficult questions about when “this amazing AI thing” will deliver an ROI.

This is not a technology failure. It is a change management failure. And it is remarkably common.

The reason is structural. Most AI programmes are scoped, budgeted, and governed as technology projects. The success criteria are technical: is the model deployed? Does it meet the accuracy threshold? Did the API go live on time? These are the wrong measures, or at least they are incomplete ones. The real question — are people using this, and is it changing how value is created? — often does not appear on the project plan until it is too late.

Change management is not the soft bit at the end. It is not a communications cascade and a training session. Done properly, it starts before the technology is selected — with a clear understanding of who will be affected, how their role will change, what they stand to gain, and what they are legitimately concerned about. Those concerns are rarely irrational. People who push back on AI adoption are not always afraid of change; sometimes they have identified a real problem with how the technology is being deployed, and they deserve to be heard. If you are committed to the responsible use of AI and people raise concerns about, say, water usage or bias, you absolutely need to listen.

The organisations that get AI adoption right tend to share a few characteristics. They invest in stakeholder engagement early and honestly, without overselling what the technology will do. They design for the people who will use the system, not just for the system itself. They treat resistance as information rather than obstruction. And they measure success by outcomes (what changed in the business) rather than by outputs (what the project delivered).

There is also a close relationship between trust and adoption. People use tools they trust, and they trust tools they understand. An AI deployment that arrives with inadequate explanation, opaque decision-making, and no obvious way to escalate errors will not be adopted, regardless of how good the underlying technology is. Buying 50 AI licences for 100 staff, rolling them out on a Monday morning, and hoping staff will suddenly become 100% more productive is a recipe for pain and more of those difficult questions.

If you are running an AI programme, or thinking about starting one, the most important question you can ask is not ‘which tools should we use?’ It is ‘how are we going to bring our people with us?’ Everything else then follows.

This piece was written by Liam at Futureformed. If it sparked a thought, we’d be happy to continue the conversation.

Get in touch

AI transparency: This article was written by Liam. The analysis, views, and conclusions are his own.