Shadow AI is already in your business. The question is what you do next.
Key takeaway
Shadow AI isn't a crisis — it's a signal. The staff already using unsanctioned tools are your best resource for understanding where AI genuinely adds value in your organisation.
One of the more consistent findings from working with organisations on their AI readiness is this: by the time leadership has decided it's time to think seriously about AI, the staff have already started. Research consistently puts the proportion of employees using AI tools their employer hasn't sanctioned at somewhere between 20% and 30%. In some sectors — particularly professional services and media — the number is likely higher.
Bear in mind that AI adoption across businesses is still uneven and early-stage. Organisations attempting to integrate AI in a deliberate, responsible way face a harder job if staff keep reaching for their favourite shadow tools. And unless your information security policy is watertight (unlikely) and personal devices are kept away from company data (near-impossible), a blanket ban on shadow AI is unenforceable.
The instinct of many organisations is to treat this as a governance problem: something to be shut down, regulated, or at least formally acknowledged before anyone ends up in a policy breach. That instinct isn't entirely wrong. There are real risks in unsanctioned AI use: data handling, confidentiality, the quality of work produced without adequate review, and the exposure of your intellectual property.
But if you lead with restriction, you miss the signal.
Shadow AI exists because people have found something genuinely useful. They are using it at home and bringing it to work because it is making their working lives easier, faster, or more interesting. If your organisation's response is to close that door rather than open a better one, you are not solving a problem — you are creating one. You are telling your people that the organisation is slower, less progressive, and less trusting than they hoped.
The more productive framing is this: what is everyone actually doing with AI, and what does that tell us about where value is being created?
That question drives a very different kind of conversation. It surfaces use cases that leadership might never have considered. It identifies where appetite already exists — which is the hardest thing to create from scratch in any change programme. And it gives you a foundation to build a proper AI strategy on, rather than a blank sheet and a lot of guesswork.
From there, the governance piece becomes much easier. Not because the risks disappear, but because people understand why the guardrails are there. You are not restricting AI use — you are channelling it. That is a much easier argument to make to a team that is already enthusiastic, and these are the people who will determine the success of an AI deployment because they're natural cheerleaders and experimenters.
The practical starting point is an audit — not a punitive one, but a curious one. Spend time with your teams. Find out what tools they are using and what they are using them for. Look at the quality of the outputs. Understand the data they are exposing. You will almost certainly find a mixture of genuinely impressive practice and some things that need to change — but you will be starting from a real picture rather than a policy framework built in a vacuum.
Shadow AI is not a crisis. It is an invitation to lead.
This piece was written by Liam at Futureformed. If it sparked a thought, we’d be happy to continue the conversation.
AI transparency: This article was written by Liam. The analysis, views, and conclusions are his own.