The governance gap
As businesses wake up to the reality of Shadow AI, many are rushing to put governance frameworks in place. The intentions are good: protecting client data, complying with incoming regulation such as the EU AI Act, and safeguarding intellectual property.
However, in our experience helping organisations audit and govern their AI usage, we see the same mistakes made repeatedly. Governance that looks rigorous on paper often fails in practice.
Here are the five biggest mistakes companies make when implementing AI governance, and how to avoid them.
1. Writing a policy without doing an audit
This is the most common mistake we see. A business decides it needs an AI policy, so someone in legal or IT drafts a comprehensive document outlining what can and cannot be done.
The problem? They have no idea what tools staff are actually using.
If you write a policy that bans tools your staff rely on daily, or fails to address the specific ways they're using AI, the policy will be ignored. You cannot govern what you cannot see. Always start with a Shadow AI audit to understand the reality on the ground before writing the rules.
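To make that concrete, here is a minimal sketch of what the technical half of an audit can look like: scanning exported proxy or DNS logs for traffic to well-known AI tools. It assumes your logs can be exported as a CSV with user and domain columns; the column names and the domain watchlist are illustrative, not exhaustive.

```python
"""Shadow AI audit sketch: count which users hit which AI tool domains.
Assumes a CSV export with 'user' and 'domain' columns -- adjust both
the column names and the watchlist to your own environment."""

import csv
from collections import Counter

# Illustrative watchlist only; extend with the tools relevant to you.
AI_DOMAINS = {
    "chatgpt.com": "ChatGPT",
    "chat.openai.com": "ChatGPT",
    "claude.ai": "Claude",
    "gemini.google.com": "Gemini",
    "copilot.microsoft.com": "Microsoft Copilot",
    "perplexity.ai": "Perplexity",
}

def audit(log_path: str) -> Counter:
    """Tally hits per (user, tool) so you can see who uses what."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["domain"].lower().removeprefix("www.")
            for known, tool in AI_DOMAINS.items():
                # Match the domain itself and any subdomain of it.
                if domain == known or domain.endswith("." + known):
                    hits[(row["user"], tool)] += 1
    return hits

if __name__ == "__main__":
    for (user, tool), count in audit("proxy_log.csv").most_common():
        print(f"{user}\t{tool}\t{count}")
```

Even a rough tally like this tells you which tools your policy actually needs to address, and which teams to talk to before you write it.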
2. Treating AI as a purely technical problem
AI governance is often handed entirely to the IT department. While IT plays a crucial role in securing systems and managing access, AI governance fundamentally involves human behaviour, legal risk, data protection, and HR policies.
Effective governance requires a cross-functional approach. Your working group should include representatives from legal, compliance, HR, and front-line operations. If only IT is involved, you'll end up with technical controls that block productivity or fail to address the nuance of how AI is used in different departments.
3. Creating a culture of prohibition
When faced with unprecedented risks, the natural reaction of many leadership teams is to ban everything. "No AI tools until further notice."
This approach doesn't work. It simply drives Shadow AI further underground. Staff who have experienced the productivity gains of tools like ChatGPT or Claude will continue to use them, but on personal devices or hidden accounts.
The goal of AI governance shouldn't be to stop the use of AI. It should be to enable its safe use. Be clear about what is prohibited, but equally clear about what is approved and encouraged.
4. Failing to provide alternatives
If you ban a tool that staff rely on without providing an approved alternative, you are creating friction that will inevitably lead to non-compliance.
For example, if you stipulate that public consumer AI models cannot be used for drafting client emails because of data privacy risks, you must provide a secure, enterprise-grade alternative (like an approved enterprise license that guarantees zero training on corporate data). A policy of "stop doing that" must be paired with "do this instead."
5. Setting and forgetting
AI technology is evolving faster than almost any previous technology shift, and a policy written six months ago may already be out of date. New capabilities, new tools, and new regulatory requirements emerge constantly.
AI governance is a continuous process, not a one-off project. Your policies, tool registers, and training materials need to be living documents. We recommend scheduling formal reviews at least quarterly, and ensuring there is a dedicated person or committee responsible for staying across new developments.
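One lightweight way to keep a tool register "living" is to make it machine-checkable, so overdue reviews surface automatically rather than relying on someone remembering. A minimal sketch, with illustrative fields, owners, and dates:

```python
"""Sketch of a machine-checkable AI tool register with a quarterly
review cadence. Entries, owners, and dates are illustrative."""

from dataclasses import dataclass
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=90)  # quarterly, per the recommendation above

@dataclass
class ToolEntry:
    name: str
    status: str         # "approved", "restricted", or "banned"
    owner: str          # person or committee responsible for this entry
    last_reviewed: date

    def review_overdue(self, today: date | None = None) -> bool:
        today = today or date.today()
        return today - self.last_reviewed > REVIEW_INTERVAL

register = [
    ToolEntry("ChatGPT Enterprise", "approved", "AI committee", date(2025, 1, 15)),
    ToolEntry("Public consumer chatbots", "banned", "AI committee", date(2024, 9, 1)),
]

# Flag every entry whose quarterly review has lapsed.
for entry in register:
    if entry.review_overdue():
        print(f"Review overdue: {entry.name} (owner: {entry.owner})")
```

Running a check like this on a schedule turns "set and forget" into a standing prompt: the register nags its owners instead of quietly going stale.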
Getting it right
Effective AI governance doesn't have to be complicated, but it does require a realistic approach. Start by understanding what's actually happening in your business, bring the right people to the table, and focus on enabling safe use rather than blanket prohibition.
The businesses that get this right won't just mitigate their risks — they'll gain a competitive advantage by empowering their teams to use AI confidently and securely.