AI rarely fails boards through a single catastrophic event. It fails them quietly, through the accumulation of things nobody thought to check.
In every boardroom I walk into, AI is now on the agenda. That is progress. But what concerns me is not whether boards are talking about AI; it is what they are choosing to worry about. Because in my experience, the risks boards focus on most are often not the ones that cause the greatest damage.
I have seen this pattern before. When ESG first arrived in boardrooms, the initial instinct was to focus on the most visible risk – usually environmental compliance – while overlooking the governance structures needed to manage the whole picture. AI is following the same trajectory. Boards tend to fixate on dramatic scenarios – a rogue algorithm, a major data breach – while six quieter risks erode their position from underneath.
Let me walk you through them.
The first is diffused accountability. AI influences decisions across operations, recruitment, finance, and customer service. But when I ask boards who is accountable for those decisions when outcomes are challenged, the answer is rarely clear. Responsibility fragments across IT, operations, data teams, and suppliers. Nobody owns the outcome. That is a governance failure, and it leaves the board dangerously exposed.
The second is speed. AI adoption accelerates through organisations at a pace that most governance processes were never designed for. Annual review cycles, quarterly risk registers, committee schedules tied to the calendar: these were all built for a slower world. By the time oversight catches up with what is actually happening, behaviour is already embedded and very difficult to unwind.
The third is low AI literacy, and this one frustrates me, if I am honest. I see it regularly: AI is delegated to management or external advisers, and the board treats delegation as if it were the same as understanding. It is not. It masks the fact that directors lack the fluency to challenge or assure decisions influenced by algorithms. Imagine a board delegating financial strategy without any member being able to read a balance sheet. That would be unthinkable. Yet we tolerate the equivalent with AI every day.
The fourth is shadow AI. I have written about this at length in a separate blog, but it is worth repeating here. Employees adopt AI tools informally to save time or improve output. Banning or ignoring that use does not eliminate the risk; it simply drives it out of sight. If your governance is unclear or overly restrictive, you are almost certainly creating the conditions for shadow AI to thrive.
The fifth is paper governance. Boards assume that having an AI ethics statement or an AI policy equals control. It does not. Without evidence of actual use, clear ownership, and defined escalation routes, these documents provide reassurance rather than assurance. There is a significant difference. A well-crafted policy sitting in a folder is governance theatre. A policy that is lived, monitored, and enforced is governance.
The sixth, and quietest, is the cost of inaction. While boards focus on what might go wrong, competitors quietly capture efficiency, cost savings, and speed advantages that compound over time.
Now, I often remind boards that these six risks are interconnected. Addressing one in isolation rarely works. Low literacy leads to poor delegation. Poor delegation creates diffused accountability. Diffused accountability feeds shadow behaviour. And all of them together widen the governance deficit.
Effective AI governance recognises the system, not just the symptom. It starts with the board being honest about where the gaps are, and having the courage to close them.