By Karl George MBE, The Governor, Founder of Governance AI
To help leaders, employees, and directors self-assess their position in this evolving landscape, we present the AI Awareness Compass. This framework is inspired by the Four Stages of Competence and adapted for the era of AI.
Read the descriptions below and identify which paragraph most accurately reflects your current mindset. Your AI journey begins with knowing where you are.
You have a vague sense that AI is important, but it feels like it belongs to someone else's role: data scientists, engineers, or IT. Terms like "GPT," "foundation models," or "generative AI" are unfamiliar or misunderstood. You may not realise that tools like ChatGPT are already embedded in daily apps, or that your colleagues are using AI without oversight.
You’re unaware of regulatory frameworks like the EU AI Act, which classifies AI systems by risk, or the ISO 42001 standard, which guides how organisations govern AI use. Without recognising the ethical, legal, and operational stakes, you may unwittingly expose your organisation to serious risks.
You've dabbled in AI. You've tried ChatGPT, seen LinkedIn posts about AI's potential, or attended a seminar. You're aware that generative AI is different from traditional automation: it learns patterns, generates original outputs, and evolves through training. You're asking the right questions: What are the limits? How do I use it responsibly? What does good governance look like?
You’re starting to explore compliance and ethical concerns. You’ve heard of the EU AI Act, UK AI White Paper, or OECD AI Principles, but haven’t connected them to your daily work. You may have concerns about hallucination risks, copyright, or fairness, but need clearer guidance.
You’re actively applying AI in your role. You understand the basics of prompt engineering and use tools like ChatGPT, Claude, or Perplexity to accelerate tasks. You know the difference between predictive AI (e.g. forecasting models) and generative AI (text, image, or code generation). You’re considering ethical questions, and you know outputs must be validated.
You're becoming familiar with AI risk categories, from minimal to high risk as defined by the EU AI Act, and you recognise the need for transparency, data protection, and bias mitigation. You're reading policies or even co-developing frameworks internally.
You don't just use AI; you govern it. You understand AI maturity models, regulatory frameworks, and ISO standards like ISO/IEC 42001. You are proactive in shaping policy, training others, and embedding responsible AI into workflows. You're confident in evaluating vendor claims, managing bias, and distinguishing between co-creation and automation.
You operate with a balance of technical fluency, ethical judgement, and strategic foresight. You can articulate the difference between a productivity enhancement and a reputational liability. You engage with external standards and consider AI explainability, traceability, and accountability as integral to risk management.
Your position on this compass isn't a score; it's a starting point. In this new intelligence economy, what matters most is the ability to evolve. AI is not standing still, and neither can we.
The greatest risk is not from the technology itself but from leaders and organisations operating in unconscious incompetence. The greatest opportunity lies with those who commit to becoming strategic stewards.