The biggest AI risk your board faces is not the system you approved last quarter. It is the one someone in finance started using three months ago that nobody knows about.
I have this conversation with boards regularly, and it almost always catches them off guard. They have spent time debating whether to adopt an AI platform, setting up working groups, commissioning due diligence. Then I ask a simple question: do you know how many of your people are already using AI tools right now, without any of that governance around them?
The room usually goes quiet.
This is what I call shadow AI, and it is one of the most underestimated risks in organisations today. It emerges when employees adopt tools informally to save time, improve their output, or simply keep pace with what they are seeing elsewhere. It is rarely malicious. In fact, it is usually well-intentioned. But it sits completely outside governance, oversight, and control.
Now, here is why that matters at a board level. Shadow AI bypasses the very mechanisms boards rely on for assurance. Data may be uploaded into external tools without approval. Decisions may be influenced by outputs that nobody has validated. Intellectual property, confidentiality, and regulatory obligations can all be compromised, and not a single person set out to do anything wrong.
In my experience, shadow AI thrives in one of two environments. The first is where governance is unclear, where people genuinely do not know what is permitted. The second is where governance is overly restrictive, where people believe AI use is prohibited or politically sensitive, so they experiment quietly. Both produce the same result: a false sense of control at board level while behaviour moves elsewhere.
I often draw a parallel here with how boards handled social media in its early days. Organisations that banned it outright did not stop their people using it; they simply lost visibility of what was being said. The ones that created clear guidelines, that defined what was acceptable and gave people a framework to work within, those were the ones that managed the risk effectively. AI governance works the same way.
So boards face a delicate balance, and it is one I help them navigate constantly. Too little governance invites chaos. Too much rigidity drives use underground. The role of the board is to define what I call safe territory: where AI can be used, for what purposes, with what safeguards, and under whose authority.
Effective AI governance does not suppress responsible use; it legitimises it. Clear principles, visible ownership, and proportionate guardrails reduce the incentive for shadow behaviour. When people know the rules of the road, they are far more likely to stay on them.
And let me be clear: if your board cannot answer the question “where is AI being used in this organisation right now?” with confidence, then the governance deficit is already wider than you think.
Ready to close the governance deficit? Register for the AI Wake-up Call: a one-day immersive experience that equips board members to keep pace with AI change, using governance as the defence for confident, accountable adoption.