I’ve had “KCSIE and AI: what the next update should say” on my list of things to write about for months. The current guidance mentions online safety in broad terms, but it has never dealt directly with the specific risks that generative AI creates for children. That gap has left schools guessing about how to handle AI-generated imagery, chatbots that simulate human interaction, and the safeguarding implications of tools that can produce realistic content on demand.

The draft KCSIE 2026, published on 12 February with a consultation period running until 22 April, closes most of those gaps. If this draft becomes policy in September, it will be the first version of KCSIE that treats AI as a mainstream safeguarding concern rather than a footnote.

Image: the GOV.UK consultation page for the Keeping Children Safe in Education proposed revisions 2026, published by the Department for Education on 12 February 2026.

What does KCSIE 2026 say about AI deepfakes?

The most significant change is in the definition of child-on-child abuse. The draft explicitly includes “self-generated intimate images and/or videos including those generated using AI e.g. deepfakes” (Part 1, paragraph 34). Previous versions used terms like “indecent,” “nude, semi-nude,” and “sexting” without addressing AI-generated content at all.

This matters because schools have been stuck in a grey area. A pupil creates a deepfake intimate image of a classmate using a freely available app. Is that a safeguarding incident under the existing framework? The current KCSIE doesn’t clearly say so. The 2026 draft removes that ambiguity. Schools must now recognise these harms, respond consistently, support victims, and manage the child-on-child dynamics that come with AI tools being accessible to any child with a phone.

For anyone who works with young people, this is overdue. The technology to generate convincing fake imagery has been widely available for over a year, and reports of AI-generated abuse material have been increasing. The guidance needed to catch up.

How does the draft treat generative AI as a contact risk?

This is the change that caught my attention most. KCSIE 2026 explicitly includes “generative AI applications that simulate [harmful online interaction]” under contact risk categories. That means AI chatbots, companion apps, roleplay bots, anonymous chat-style tools, and AI that imitates a real person are now treated as safeguarding concerns in the same category as contact from a stranger online.

Image: 9ine’s analysis of the KCSIE 2026 draft highlighting new AI safeguarding obligations for schools, published February 2026.

The distinction matters. Previous guidance framed online risk primarily as content (what a child sees) and contact (who a child interacts with). Generative AI blurs that line. A child having a disturbing conversation with an AI companion isn’t being contacted by a person, but the psychological impact can be similar. By placing generative AI within the contact risk framework, the DfE is signalling that schools need to take AI interactions as seriously as they would an unknown adult messaging a pupil.

Two new paragraphs have also been added to help schools understand safety considerations and legal responsibilities when choosing to use generative AI themselves. As 9ine’s analysis of the draft points out, this shifts AI from being a “niche add-on” concern to something embedded directly in the core safeguarding vocabulary.

Filtering, monitoring, and cyber security changes

The draft tightens requirements around filtering and monitoring. Schools must now review the effectiveness of their filtering and monitoring systems “at least once every academic year” (Part 1, paragraph 166). Previous guidance recommended regular reviews but didn’t specify a minimum frequency. That annual review must be documented, and governing bodies need to maintain records.
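
For a vendor or IT lead, the documentation side of this is straightforward to satisfy with a simple structured log. Here is a minimal sketch of what such a record might look like; the structure and field names are my own illustration, not anything the draft prescribes.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class FilteringReview:
    """Illustrative record of an annual filtering and monitoring review.

    Field names are hypothetical, not taken from KCSIE.
    """
    review_date: date
    reviewed_by: str              # e.g. the DSL plus a safeguarding governor
    systems_checked: list[str]
    gaps_found: list[str]
    actions_agreed: list[str]

def review_is_current(last: FilteringReview, academic_year_start: date) -> bool:
    """True if the last documented review falls within the current academic year."""
    return last.review_date >= academic_year_start

# Usage: evidence for the "at least once every academic year" requirement.
review = FilteringReview(
    review_date=date(2026, 10, 1),
    reviewed_by="DSL and safeguarding governor",
    systems_checked=["web filter", "classroom monitoring", "device policies"],
    gaps_found=["AI image generators not categorised by the current filter"],
    actions_agreed=["raise with filtering vendor", "re-test next half term"],
)
assert review_is_current(review, academic_year_start=date(2026, 9, 1))
```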

More significantly, cyber security has been elevated from an IT task to a core safeguarding responsibility. Paragraph 170 of the draft treats the compromise of safeguarding records or child data as an immediate safeguarding concern, not just a data breach to report. If a school’s systems are compromised and pupil records are exposed, that is now explicitly a child protection issue. Schools are expected to align their cyber security measures with the DfE’s cyber security standards.

For school IT teams and the vendors who serve them, this is a meaningful shift. I wrote recently about keeping pupil data out of AI systems entirely as both a data protection measure and a reliability one. The KCSIE 2026 draft reinforces why that architectural decision matters. If the AI never touches the data, a compromised AI tool doesn’t become a safeguarding incident.
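
To make that concrete, here is a minimal sketch of one way to keep pupil data out of an external model: identifiers are swapped for placeholders before any text leaves the school’s systems, and swapped back locally when the response returns. Every name, pattern, and function here is illustrative, not a description of any particular product.

```python
import re

# Hypothetical identifier format, e.g. an internal MIS pupil number.
PUPIL_ID = re.compile(r"\b\d{4}-\d{4}\b")

def redact(text: str, names: list[str]) -> tuple[str, dict[str, str]]:
    """Replace known pupil names and ID patterns with neutral placeholders."""
    mapping: dict[str, str] = {}
    for i, name in enumerate(names):
        placeholder = f"[PUPIL_{i}]"
        if name in text:
            text = text.replace(name, placeholder)
            mapping[placeholder] = name
    return PUPIL_ID.sub("[ID]", text), mapping

def restore(text: str, mapping: dict[str, str]) -> str:
    """Re-insert real names locally, after the AI response comes back."""
    for placeholder, name in mapping.items():
        text = text.replace(placeholder, name)
    return text

# The external model only ever sees the redacted form:
prompt, mapping = redact("Write a report for Jo Bloggs, 1234-5678.", ["Jo Bloggs"])
assert prompt == "Write a report for [PUPIL_0], [ID]."
# response = some_external_model(prompt)   # stand-in for any AI API call
# final = restore(response, mapping)       # real names never left the school
```

The point is architectural: if a compromise of the AI tool exposes only placeholders, the paragraph 170 concern never arises.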

What about mobile phones?

The draft pushes schools toward being “phone-free by default”, with only limited exceptions. Pupils should not have access to their phones at any point in the school day, including break and lunch times. This isn’t entirely new (many schools already restrict phones), but codifying it in statutory guidance gives schools a clearer mandate.

From an AI perspective, this is practical. Many of the AI tools that create safeguarding risks (image generators, companion chatbots, deepfake apps) are accessed on personal devices outside the school’s filtering and monitoring systems. A phone-free school removes the most common vector for unmonitored AI use during the school day.

What this means for anyone building AI for schools

I’m reading this draft from two perspectives: as someone who thinks about school safeguarding, and as someone building an AI product for schools at Ask.School. The tightening of KCSIE around AI is, on balance, a good thing for the sector.

Clearer expectations mean schools can ask better questions of their vendors. “How does your product handle the KCSIE 2026 requirements around generative AI?” is a more useful procurement question than “Is your AI safe?” The DfE’s generative AI product safety expectations already provide a framework for this, and the new KCSIE draft creates the statutory backbone that makes those expectations matter in practice.

The consultation runs until 22 April 2026. If you work in a school or build products for schools, it is worth reading the draft and responding. The AI provisions are largely sensible, but the detail of implementation will matter enormously. How schools are expected to monitor AI use on personal devices outside school hours, what “adequate” filtering looks like for AI-generated content, and how designated safeguarding leads (DSLs) should handle AI-specific incidents are all areas where the final guidance could be stronger.

September is coming fast. Schools that start thinking about these changes now, rather than waiting for the final version, will be better prepared when it lands.