AI Insights Focus: KCSIE 2025 and the AI Gap

KCSIE 2025: What It Covers, What It Misses, and What We Can Do About It (Together)

You probably saw it land in your inbox. Another year, another iteration of Keeping Children Safe in Education. The 2025 update is here, and yes, it brings some progress. But if, like me, you’ve been gearing up for a new wave of staff CPD, parent comms and safeguarding audits focused on AI, you might have opened KCSIE 2025 with a mix of hope and caution.

Turns out, that caution wasn’t misplaced.

Let’s be clear: this isn’t about slating KCSIE. It’s a vital document, and the new version gets a lot right. But with the IWF reporting a 380% rise in AI-generated CSAM in 2024, and schools grappling with deepfake bullying, sextortion and algorithmic harm, we were hoping for a bit more backup.

This post is a companion piece. Think of it as a supportive sidekick to KCSIE 2025, here to help plug the gaps and offer a roadmap for the things the guidance hints at but doesn’t quite say out loud.

I’d already been deep in the weeds prepping AI safeguarding CPD and a few keynotes I’ve been asked to deliver next academic year. So, this is part analysis, part working notes turned resource.

Use what helps. Skip what doesn’t.

The Good: Where KCSIE 2025 Shows Up for AI Safeguarding

Let’s start with what is there. And credit where it’s due, there are some solid steps:

  • Online Harms, Now with Added Realism: Paragraph 142 explicitly mentions "misinformation, disinformation (including fake news), and conspiracy theories". It’s a clear nod to the algorithmic, AI-fuelled media ecosystem kids are growing up in.
  • Signposting the DfE’s AI Product Safety Guidance: Paragraph 143 gently waves you toward another document, one that outlines some practical expectations for safe use of AI in schools. It’s useful, if you know where to look and have the time to dig in.
  • Cybersecurity Namecheck: Paragraph 144 reminds schools to comply with the Cyber Security Standards for Schools and Colleges. AI isn’t mentioned here directly, but if staff or students are uploading data to public AI tools, this bit matters.

So far, so sensible. But it’s also a bit safe. A bit cautious. Which would be fine if schools weren’t being hit with deeply unsafe things every week.

The Gap: What KCSIE 2025 Doesn’t Say (But Urgently Needs To)

1. No Clear Ask for an AI Policy

We’re past the point where AI guidance in schools is optional. Staff are using it to plan lessons, sometimes to mark essays and write reports. Students are using it to experiment, explore, and sometimes push boundaries. Some are using it to generate images of themselves, even classmates.

And yet, nowhere in KCSIE are schools asked to have an explicit AI policy. There’s no guidance on acceptable use, no mention of tools that shouldn’t be used with pupil data, and no hint of training expectations for staff.

What would help? Just a paragraph saying:

Schools should have a clear and regularly updated policy on the use of generative AI tools by staff and pupils, aligned with data protection and safeguarding frameworks.

Failing that, you can still do this yourself. Slot it into your current digital strategy policy. Flag it in the staff code of conduct. Make it part of the DSL’s briefing to staff and governors.

2. Silence on Deepfakes and AI-Generated Abuse Material

This one’s big. The Internet Watch Foundation reported a 380% increase in AI-generated child sexual abuse material between 2023 and 2024. The NCA and IWF have issued joint warnings. This is not niche. It’s not emerging. It’s now.

Yet KCSIE doesn’t mention deepfakes. It doesn’t clarify that AI-generated Child Sexual Abuse Material (CSAM) is just as illegal as traditional forms. It doesn’t warn DSLs that some of this is happening peer-to-peer in school communities.

Worse still, it misses the opportunity to confront the growing misconception that AI-generated CSAM is somehow “victimless” because it doesn’t involve a direct physical assault. The Marie Collins Foundation has called that belief “both false and deeply damaging.”

KCSIE could simply state:

Creating, possessing or sharing AI-generated sexual images of children is a crime. If this happens in your school, it is a safeguarding incident and a police matter.

That one sentence would close a dangerous gap. Right now, many staff won’t realise what they’re dealing with.

And it’s not just predators. The accessibility of these tools means harm is now happening between children: the pool of perpetrators includes peers. A pupil with a grudge and a phone can create and share deepfake abuse material in minutes. It’s already happened in UK schools. The psychological harm is devastating. The legal and safeguarding implications are enormous.

3. No Support on AI-Driven Sextortion and Grooming

AI isn’t just being used to make harmful content. It’s also being used to exploit children directly. Perpetrators are using AI to enhance the reach, speed and realism of their grooming tactics. They can now create convincing fake identities, generate emotionally persuasive language, and even produce false images that are almost impossible for a child to distinguish from reality.

One of the most alarming forms of this is AI-facilitated sextortion. In these cases, a child doesn’t need to have shared anything intimate. A predator might take a photo from a public profile and run it through a nudification app or an AI image generator to create something explicit. That fake image is then used to blackmail the child.

This kind of abuse is happening to children who have never taken or shared an inappropriate photo in their lives. It’s not about poor decisions or “risky” behaviour. It’s about being findable. The barrier to victimisation is now frighteningly low.

The psychological impact is enormous. A child who sees a fake but realistic sexual image of themselves feels the same fear, shame, and panic as if the photo were real. It’s a form of abuse that creates serious harm, even if the image is synthetic.

KCSIE 2025 rightly includes online grooming in its general guidance, but it doesn’t connect this to the role of AI. It doesn’t mention blackmail using fake imagery. It doesn’t equip staff to respond to disclosures where the threat involves something fabricated by a machine.

That’s a problem. Because in the moment, the child won’t care whether it’s AI or not. They’ll just be terrified. And unless staff understand that AI-generated threats are real threats, we risk brushing them off or underestimating their severity.

A few lines would go a long way:

Professionals should be aware that sextortion and grooming may involve AI-generated imagery or interactions, and should respond accordingly. The absence of a real image does not reduce the safeguarding risk.

We don’t need a new manual to act. But we do need permission, clarity, and confidence. Especially when the tools being used to hurt children are this new and this fast-moving.

4. Algorithmic Amplification: Mental Health, Radicalisation, and Misogyny

AI doesn’t just power chatbots and image generators. It also drives the algorithms behind social media feeds, video suggestions and search results. These systems are designed to keep users engaged, not safe, and for children that can lead to real harm. A pupil searching for mental health support might quickly be shown content that promotes disordered eating or self-harm. The more they engage, the worse it gets.

These algorithms also amplify extreme or harmful views. Many schools are reporting a rise in misogynistic attitudes among boys, often linked to influencers promoted by mainstream platforms. These figures use humour and emotional hooks to attract young followers and slowly shift their thinking. What starts as a few “edgy” clips can quickly grow into something much more dangerous.

Because the content is algorithmically served, pupils don’t even need to go looking for it. It finds them. That creates echo chambers where extreme ideas feel normal, and alternative views are filtered out. Over time, this can affect a pupil’s wellbeing, their relationships, and even their behaviour in school.

KCSIE 2025 does mention harmful content and disinformation, but it doesn’t explain how children are being drawn into it. It’s not just about the content itself. It’s about the systems pushing it to them, again and again, in ways that are hard to spot and harder to challenge.

5. EYFS and the Normalisation of Surveillance

We often assume safeguarding risks from AI start when children get phones. In reality, many connected toys and learning apps for young children collect voice recordings, location data and behavioural patterns, often without clear parental consent. This quiet data harvesting is rarely visible but lays the groundwork for lifelong surveillance.

Some AI-enabled toys act as “companions”, chatting with children or responding to commands. While this can seem engaging, there are real concerns about how these interactions affect social-emotional development. Children at this stage are learning empathy, turn-taking and how to read body language. Conversations with AI don’t offer that. They are smooth and one-sided, which can stunt key developmental skills.

This is no longer hypothetical. The toy giant Mattel announced a partnership with OpenAI to build generative AI into its product lines. This means children could soon be playing with AI-driven toys that generate responses, narratives and possibly even synthetic voices in real time. For many children, that will be their first experience of a generative AI system. If those interactions are collecting data or blurring the line between real and artificial relationships, the safeguarding implications are serious.

KCSIE 2025 doesn’t mention any of this. The Early Years section focuses on familiar safeguarding concerns, but not the growing role of datafied play or AI-based tools. If your setting is using connected devices with under-fives, you need to know what data is being collected, whether it complies with the Children’s Code, and how it might shape the habits and attitudes of the children in your care.

6. Staff Conduct Risks Aren’t Acknowledged

Staff use of AI is growing fast. Teachers are using it to plan lessons, draft emails, write reports and more. But without clear boundaries or training, this opens up a new category of safeguarding risk, one that KCSIE 2025 doesn’t touch.

Staff could upload sensitive pupil data into public AI tools without understanding where that information goes or how it might be stored. Others might use AI tools to generate jokes, character dialogue or even satirical deepfakes, not realising how inappropriate or risky this can become.

But KCSIE doesn’t mention AI in the section on staff behaviour and misconduct.

We need clear reminders that:

  • AI tools fall under data protection and safeguarding law
  • Use of AI to create inappropriate material is misconduct
  • Uploading identifiable pupil data into public AI tools could constitute a breach

Without this, many staff will think, “Well, the guidance doesn’t say we can’t…”
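
On the data point specifically, schools with in-house technical capacity don’t have to rely on reminders alone. Here’s a minimal, purely illustrative sketch (not drawn from KCSIE or the DfE guidance; the function name and roster are hypothetical) of the kind of guardrail an IT team could put in front of any public AI tool: stripping pupil names out of text before it ever leaves the school.

    import re

    def redact_pupil_names(text: str, roster: list[str]) -> str:
        """Replace any name from the school roster with a neutral placeholder.

        Purely illustrative: a real deployment would also need to handle
        nicknames, other identifiers (dates of birth, addresses) and be
        signed off via a DPIA.
        """
        redacted = text
        for name in roster:
            # Whole-word, case-insensitive match so short names don't hit substrings.
            pattern = re.compile(rf"\b{re.escape(name)}\b", re.IGNORECASE)
            redacted = pattern.sub("[PUPIL]", redacted)
        return redacted

    if __name__ == "__main__":
        roster = ["Aisha Khan", "Tom Reilly"]
        report = "Aisha Khan found fractions difficult this term; Tom Reilly supported her."
        print(redact_pupil_names(report, roster))
        # [PUPIL] found fractions difficult this term; [PUPIL] supported her.

It won’t catch everything, and it isn’t a substitute for policy or training. But it shows that the technical side of this gap is closable, even before the guidance catches up.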

7. KS5 and Legal Transition to Adulthood

By the time pupils reach Key Stage 5, the safeguarding landscape changes. These young people are not just preparing for university or work. They are also crossing into adulthood, both legally and developmentally. Many will turn 18 while still on roll, which means they lose certain child-specific protections and gain full responsibility for their online actions. Students in Key Stage 5 are often navigating:

  • AI-powered university admissions
  • Automated career advice
  • The full weight of adult law, including liability for what they generate and share online

They also lose protections like the Children’s Code when they turn 18. If they’re still in your setting, this is your moment to teach them what it means to be a legally responsible digital citizen.

Beyond conduct, there are also concerns about how AI might influence the future choices of these students. University admissions and job recruitment are starting to rely on AI-powered systems to filter applications. If those systems are trained on biased data, pupils from underrepresented backgrounds may find themselves unfairly filtered out. Many students have no idea these systems exist or how to question them.

KCSIE 2025 offers no guidance on this transition. It doesn’t prompt schools to prepare students for their new legal responsibilities, nor does it mention the growing influence of AI on access to education and work. At this stage, safeguarding is not about protection alone. It’s also about empowerment. Students need to know their rights, understand how their data is being used, and recognise the risks of assuming every AI-driven recommendation is neutral or fair.

AI as Footnote, Not Infrastructure

At the moment, KCSIE treats AI as a small thread within the wider picture of online safety. But AI isn’t just another app to manage. It is quickly becoming the underlying system that powers how online harm is created, shared and experienced.

Deepfakes, sextortion, grooming, blackmail, misinformation, and mental health risks are not new. What has changed is the speed, scale and realism with which AI enables them. It doesn’t replace our existing concerns. It multiplies them.

This is not about future risks or hypothetical scenarios. It is about a shift in how children are targeted, influenced and harmed right now.

That’s why safeguarding leaders need more than signposts.

So What Can Schools Do Now?

Here’s what I recommend to fill in the blanks:

Develop a Living AI Policy

Link it to your child protection, IT, behaviour and data policies. Review it termly and update as necessary. Define:

  • What tools are allowed
  • How they can be used
  • What counts as misuse
  • What to do if something goes wrong

Train Staff on AI Risks

Don’t just do a one-off inset. Build it into DSL refreshers, staff CPD, and digital safeguarding briefings. Topics to cover:

  • AI-CSAM and deepfakes
  • Sextortion involving fake images
  • Uploading data into AI tools
  • Misuse of AI by staff or pupils

Talk to Pupils About Ethical AI Use

This isn’t just about preventing harm. It’s about:

  • Questioning AI output
  • Knowing when something’s biased
  • Recognising manipulation
  • Spotting when AI is being used to deceive or coerce

Engage Parents

Most are unaware of the speed or scale of AI risk. Hold workshops. Share example cases (anonymised). Offer practical tips. Frame it as: we want to help, not panic.

Review Tools Through a Safeguarding Lens

Before introducing any AI tool:

  • Ask what data it collects
  • Ask how it’s trained
  • Check age ratings and default settings
  • Run a DPIA (Data Protection Impact Assessment) if relevant

Don’t Wait for Permission

KCSIE 2025 might not tell you to do all of this. But it also doesn’t say you can’t.

And if you’re here, you’re probably already doing the heavy lifting. This blog isn’t here to criticise your work. It’s here to back it up.

If you want editable policy templates, CPD slides, or a short staff briefing doc on AI safeguarding, I’ve got you. I’ll be sharing everything I’ve created for the start of the next academic year.

No cost. No gatekeeping.

The harm is already happening. And waiting for guidance to catch up isn’t a luxury schools have anymore.


References

  1. National Crime Agency (2024). Joint Warning on the Rise of AI-Generated Child Sexual Abuse Material (CSAM).
  2. Internet Watch Foundation (IWF) (2024). Annual Report on Online Child Sexual Abuse Material: AI-CSAM Surge.
  3. NSPCC (2025). The Dangers of AI-Enhanced Grooming and Sextortion.
  4. UK Council for Internet Safety (UKCIS) (2025). Online Safety and the Rise of AI-Generated Content.
  5. Children’s Commissioner for England (2025). Addressing the Emerging Threats of AI in Child Protection.
  6. Ofcom (2025). Online Safety Act: Protection of Children Codes of Practice.
  7. Information Commissioner’s Office (ICO) (2025). Age Appropriate Design Code (Children’s Code).

