When I was invited to speak to fellow headteachers about AI, my first thought was, “What a brilliant chance to learn out loud, hopefully supporting my peers from a fairly prestigious platform provided by the local authority.” My second was, “What on earth do people think I am – an AI expert? Why me – what have I been saying about my AI use? Do they think I’m a lazy, shortcut headteacher?” I’m not. I’m a curious practitioner who’s made plenty of mistakes, noticed what helps, and tried to be honest about what doesn’t. That’s the spirit I’m bringing here too: no hype, no doom – just what’s proving useful, what’s uncomfortable, and how I feel I can lead with integrity and care, using AI as a tool. An AI Headteacher? An AI-enhanced Headteacher? Perhaps the latter – let me tell you why.

Confession first

Earlier this year, I found myself drafting a eulogy. In that tender, human space, a tiny voice whispered, “AI could help you say this well.” That was a wake-up call. I realised AI had started to reshape my instincts. Every little AI win I’d experienced professionally over the preceding months had given me a dopamine nudge: this saved time; this sharpened the message; this made me better. Helpful? Yes. But it also left a faintly “dirty” feeling – had I outsourced something essential?

I don’t think we can talk about good AI use in schools without attempting to explore the psychology. If we want our colleagues to adopt tools safely and sustainably, we need to model the internal checks – not just the external rules.

Three questions I now ask every time I touch AI

Over the last year, I’ve codified three questions that help me use AI consciously, not compulsively:

  1. Will it improve the quality?
  2. Will it save time (and how will I reinvest that time in people)?
  3. How will it make me feel about my work? Will I still recognise my voice, values and judgement?

If I can’t answer those clearly, I don’t use it. And even when I do, I never outsource the final judgement.

The three “methods” that have changed staff conversations

To make this practical and to keep us honest with parents and governors, I’ve been framing AI use around three simple methods. They’re not formal policy language; they’re a shared code colleagues can actually remember and use.

  • Method 1: Human prompt → AI generates → Human refines
    – Great for low-stakes, text-heavy grind: quick procedural updates, quiz questions, tidy minutes from a transcription.
    – Upside: huge time-saver.
    – Watch-out: it’s where the “dirty feeling” creeps in quickest. Use sparingly.
  • Method 2: Human first draft/details → AI develops → Human refines
    – Good for weightier comms, policy summaries, or model texts where your thinking is already on the page.
    – Upside: raises quality and consistency.
    – Watch-out: don’t let AI smooth off all the humanity.
  • Method 3: Human draft → AI critiques → Human improves
    – My default for anything high-stakes: plans, change initiatives, safeguarding comms, HR docs.
    – Upside: keeps authorship, improves rigour, surfaces blind spots, critiques against key policies/guidance.
    – Watch-out: takes discipline (and a bit more time), but it’s worth it.

When I score each method out of 5 against the three questions (quality, time, feeling), with 3 as neutral, Method 3 wins for leadership tasks (an overall score of 12.5 out of 15), Method 2 has a strong place (12 out of 15), and Method 1 is less helpful but has its place (11 out of 15). And importantly, all three methods came out net positive: neutral across all three questions would be 9, so anything above 9 is a gain… I’ve found this to be a helpful reflection task for all colleagues to consider – a worked example follows.
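
To make the exercise concrete, here is one way the scoring might look on the page. The totals are mine from above; the per-question splits are invented purely for illustration, and your own numbers will differ:

    Method      Quality   Time   Feeling   Total (/15)
    Method 3       5       3.5      4         12.5
    Method 2       4       4.5      3.5       12
    Method 1       3       5        3         11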

What AI is genuinely good at in leadership

  • Taming the text-heavy grind. Drafting first passes of letters, newsletters, governor papers, or turning a messy transcript into clean minutes.
  • Policy sense-checks. Cross-referencing school docs against statutory guidance to spot gaps before they become headaches.
  • Safeguarding micro-training. Bite-sized quizzes and explanations, grounded in KCSIE language, to drip-feed reminders in briefings.
  • Crisis triage. Anonymised scenarios to map next steps against policy and local processes before you pick up the phone.

Used well, this has meant more presence with pupils and staff, sharper communication, and greater confidence that we’re compliant and compassionate.

The bit no one warned me about: personality and “AI pull”

If you’ve ever done Insights or DISC, you’ll know we all bring different energy preferences. Mine isn’t naturally “blue”: methodical, painstaking, detail-first. Guess what AI is scarily good at? Blue tasks. That, I suspect, makes someone like me more susceptible to overuse. Colleagues who are naturally blue often prove to be our healthy sceptics, and I’m grateful for them. I can’t find any data or research on this yet, but I’m curious whether building it into training would help teams see why reactions to AI differ, and why mixed-energy planning is a strength, not a clash.

Guardrails that protect people (and trust)

I’ve come to believe that ethics plus compliance is the only sustainable foundation:

  • Data minimisation by design. No personal/identifiable data in public models. If in doubt, redact or don’t use.
  • Privacy-conscious tooling. Use tools that don’t train on your inputs and allow deletion/incognito; configure settings accordingly.
  • Human authorship and accountability. AI never makes high-stakes decisions about pupils or staff; humans review, edit and sign off.
  • Transparency with your community. If parents ask, we can explain not just that we use AI, but how, in plain language, via the three methods.
  • Professional learning and oversight. Short, practical staff training focused on accuracy, bias, tone, and emotional impact; light-touch review through your online safety or digital group.

What this looks like in practice (a few real examples)

  • Minutes that don’t cost your Sunday. Auto-transcribe, then Method 1 to produce a clean summary you can sanity-check in five minutes.
  • Parent comms under pressure. Method 2 to refine tone: clear, kind, and firm, without sanding away your school’s voice.
  • Safeguarding drip-feed. Method 1/2 to build one question a week, anchored in current guidance, to keep everyone alert.
  • HR and recruitment. Method 3 to critique adverts, role profiles and selection tasks for clarity and fairness; and to spot AI-written personal statements that don’t match the human in the interview (we’ve all seen them, or will do soon).

A note on lesson planning

Tempting as it is to “generate me a full lesson on…” I’m steering staff towards Method 3: “Here’s my plan – critique it against X outcomes and Y guidance.” Ownership matters. So does professional growth. AI can be a coach; it shouldn’t be your substitute teacher.

If you’re starting out, try this

  • Pick one leadership task this week and run it through Method 3. Notice what improves, and what still needs you.
  • Ask the three questions before you touch a model. If you can’t answer them positively, don’t use it.
  • Make a 1-page “Ways We Use AI Here.” Use the three methods as your spine. Share it with staff and governors; be ready to share it with parents.
  • Pair opposites. Match a meticulous, detail-focused colleague with a big-picture thinker. Compare outcomes with and without AI.