This interview features Karl George MBE, known as “the Governor”, a leading governance expert and founder of The Governance Forum and Governance AI. Building on his white paper on the AI Reset Revolution, he reflects on how the Intelligence Age is transforming decision-making, board oversight, and the very production of intelligence. In this conversation, he explains why AI governance must go beyond platforms and algorithms to encompass systems, processes, people and culture – and what boards need to do now to be ready.
When I talk about the AI Reset Revolution, I describe it as a fundamental shift in the production of intelligence, not just the next wave of digital tools. For the first time, we have scalable cognition on tap: systems that can draft, analyse, simulate and advise at a speed and breadth no human team can match. That changes how strategies are formed, how risks are assessed and how value is created.
In the white paper I describe three shifts that boards need to grasp. The first is the sheer pace of change: AI capability is moving so fast that traditional governance cycles and planning horizons are struggling to keep up. The second is the growing responsibility of the board: the technology is accelerating, but many boards are standing still, which creates a widening gap between what AI is doing in the organisation and the board’s ability to oversee it. The third shift is about safe implementation, which depends on the AI literacy of leadership. If executives and non-executives do not understand, at a practical level, how these systems work, where they can fail and how they should be controlled, it is impossible to deploy AI safely and responsibly at scale.
Taken together, these shifts mean we are entering an Intelligence Age, not managing an IT upgrade, and boards need to absorb that message. That is what I am trying to capture and explain in the white paper.
Through that reset lens, the biggest strategic opportunity I see is the ability to combine human judgement with machine-scale analysis to create better decisions, faster. Boards can use AI to enhance foresight, stress-test strategies, personalise services at scale and unlock entirely new business models. The organisations that treat intelligence as a strategic asset, not a back-office function, will outpace their peers.
The greatest risk, however, is the governance deficit. Too many boards are still delegating AI to IT or innovation teams, without clear accountability, an agreed risk appetite or ethical guardrails. That is made worse by what is called shadow AI: staff and teams quietly adopting consumer tools or building unapproved automations that sit completely outside the organisation’s policies, controls and risk registers. Data leaks, biased outputs and poor decisions often come from these shadow uses, not the official programmes.
In my view, the most underestimated risk is this combination of over-reliance on systems directors do not understand and under-investment in governance and literacy, while shadow AI grows under the radar.
I often say to boards: do not tell me about pilots and PowerPoint; show me your architecture of control. Readiness is not the number of experiments under way; it is the degree to which AI is integrated into your strategy, risk framework and governance model. I would expect to see a clear AI strategy aligned to the business plan, defined risk appetite statements for AI, and an inventory of material use cases that are actually in production.
There should be explicit board and executive accountabilities, clear escalation routes when something goes wrong, and evidence that AI-related risks are embedded in the enterprise risk register rather than sitting in a silo. On top of that, I put a huge premium on AI literacy at leadership level. That does not mean every director is a data scientist; it means they understand enough about how these systems work, where they fail and what questions to ask. If leaders cannot interrogate assumptions, challenge over-claiming or spot when AI is being over-trusted, they cannot honestly say they are ready for the change that is coming.
If all you can show me is glossy decks, a couple of proofs of concept and a room full of people who feel out of their depth, you are in comfort mode, not readiness.
Culturally, the most important shift is from seeing AI as a cost-cutting device to seeing it as a human-centred co-pilot. Leaders need to create an environment where people feel that AI is there to enhance their capability, not quietly replace them. That means being honest about change, investing in skills and designing roles where human judgement, creativity and empathy remain central.
For me, having a human in the loop for high-stakes decisions is non-negotiable, and we need to be very clear about where that human sits in each workflow, not just assume they are “somewhere in the process”. That is one of the reasons I developed the concept of an AI Transparency Index: a way of mapping and labelling where and how AI is involved in decisions and content so boards, staff, customers and regulators can see the degree of automation and the point of human oversight.
In practical terms, I look for organisations that talk about dignity, well-being and fairness in the same breath as productivity and efficiency. If the only story employees hear is “automation and savings”, you will trigger resistance and anxiety. If they can see clearly where AI is used, where humans remain in control and how they can challenge or appeal a decision, you build trust and responsible adoption.
Using the TGF methodology, I frame AI governance as an extension of good governance, not a separate discipline. On the compliance side, boards need the right resources, structures and documents: an AI strategy, a clear policy framework, defined roles and terms of reference, and a competent, diverse board that understands its responsibilities in this new context. Execution matters: AI must be built into normal decision-making, not left to side projects.
On the performance side, the same framework applies: transparency about where and how AI is used, clarity on the impact it is having on stakeholders and results, and close attention to behaviours in the boardroom and across the organisation. Are we asking tough questions or rubber-stamping? Are we willing to pause or redesign an AI use case if the behaviour or outcomes feel wrong, even if the numbers look attractive? That blend of compliance and performance is where AI starts to create real, sustainable value.
When I define AI governance as oversight of systems, processes, protocols and decision-making, I am deliberately widening the lens. Every board, in my view, now needs at least six guardrails. First, a board-approved AI strategy and policy that set out where the organisation will and will not use AI. Second, a named executive owner and a clear board committee remit so accountability is not diffuse. Third, an AI risk appetite and a set of key risk indicators embedded in the main risk framework.
Fourth, a register of material AI use cases, tagged with something like an AI Transparency Index so you know the level of AI involvement in decisions and content. Fifth, “human in the loop” rules for high-stakes decisions, with clear rights of review and redress. Finally, independent assurance over critical systems, covering data, bias, security and ethics. Those are the practical levers that turn a broad definition of AI governance into lived practice.
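To make the register idea concrete, here is a minimal sketch of what such an entry could look like if expressed in code. It is illustrative only: the transparency levels, field names and the guardrail check are assumptions made for the example, not the published scheme of the AI Transparency Index™ or TGF methodology.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum


class TransparencyLevel(Enum):
    """Illustrative AI-involvement levels; the actual AI Transparency
    Index may define different categories."""
    HUMAN_ONLY = 0      # no AI involvement
    AI_ASSISTED = 1     # AI drafts or analyses; a human authors the output
    AI_RECOMMENDED = 2  # AI recommends; a human approves each decision
    AI_AUTOMATED = 3    # AI decides within a mandate; humans review by exception


@dataclass
class AIUseCase:
    """One entry in a register of material AI use cases in production."""
    name: str
    executive_owner: str             # named accountable executive
    transparency: TransparencyLevel  # tagged level of AI involvement
    human_in_loop: bool              # explicit human checkpoint in the workflow
    high_stakes: bool                # ethically complex or values-based decisions
    risk_register_ref: str           # link into the enterprise risk register
    last_assured: date               # most recent independent assurance review


def breaches_guardrails(use_case: AIUseCase) -> list[str]:
    """Flag entries that violate the 'human in the loop' rule for
    high-stakes decisions or sit outside the enterprise risk framework."""
    issues = []
    if use_case.high_stakes and not use_case.human_in_loop:
        issues.append("high-stakes use case lacks a human checkpoint")
    if not use_case.risk_register_ref:
        issues.append("not linked to the enterprise risk register")
    return issues


register = [
    AIUseCase(
        name="Customer credit pre-screening",
        executive_owner="Chief Risk Officer",
        transparency=TransparencyLevel.AI_RECOMMENDED,
        human_in_loop=True,
        high_stakes=True,
        risk_register_ref="ERR-2025-014",
        last_assured=date(2025, 3, 1),
    ),
]

for entry in register:
    for issue in breaches_guardrails(entry):
        print(f"{entry.name}: {issue}")
```

In practice such a register would live in a governance platform rather than a script, but the structure is the point: every material use case carries a named owner, a tagged level of AI involvement, an explicit human checkpoint and a line into the main risk framework, so the guardrails can be checked rather than merely asserted.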
We are on the cusp of a transformation that will fundamentally alter how boards operate. Within the next three years, I expect to see AI agents attending meetings on behalf of executives, preparing reports, responding to questions in real time, and even participating in negotiations within defined mandates. We may see AI-powered observers sitting in boardrooms to identify risks, flag conflicts of interest, and raise compliance alerts as they happen. These are not science-fiction scenarios; they are on the near horizon.
The question is not whether AI will become more deeply embedded in governance processes, but how we will manage that transition. Boards must act now to establish the frameworks that will ensure this evolution strengthens rather than undermines effective governance.
Boards should work on the assumption that AI agents and AI observers will soon be present in and around the boardroom, and prepare for this shift now rather than later. A first priority is transparency. Directors need clear rules requiring disclosure whenever AI has contributed to board papers, analysis or recommendations. Board members should understand not only that AI has been used, but how it has been used and where its judgement may be limited, so that human judgement is informed rather than quietly displaced. The AI Transparency Index™ provides a starting point, offering a classification system that defines the level of AI involvement in content and decisions.
At the same time, fiduciary responsibility needs to be interpreted in a more modern way. Directors’ duties of care, skill and diligence now include a responsibility to understand, at a practical level, how AI systems are shaping advice and decisions. This does not mean becoming technologists, but it does mean asking better questions, being clear about assumptions and risks, and deciding explicitly which decisions can be supported by AI and which must remain firmly human-led.
Finally, boards must put accountability and boundaries in place before things go wrong. AI cannot be accountable, so responsibility must always sit with people. Boards should be clear about who is responsible when AI-informed advice is relied upon, and where ultimate ownership lies. They should also set limits on AI use, keeping ethically complex, high-stakes and values-based decisions in human hands, while allowing AI to support analysis where it adds value. In doing so, boards can benefit from AI’s insight without losing the judgement, responsibility and moral authority that sit at the heart of good governance.