I use AI tools every day. I build school software with them, I write code with them, and I’d genuinely struggle to go back to working without them. This isn’t a post about why schools should avoid AI. It’s about something more specific: when an AI tool does something wrong, who’s responsible?

The answer, in every case, is you. Not the AI company. Not the algorithm. The school that chose to deploy the tool, gave it access, and let it run.

Two recent experiences of my own made this uncomfortably clear.

[Image: The Claude AI interface from Anthropic, one of the AI tools used in the experiences described in this post]

When AI decides it knows better

I was building a knowledgebase application and had chosen a specific open-source project as the foundation. I gave Claude Code clear instructions: build the framework using this project. The AI asked a few clarifying questions, I answered them, and it went ahead.

When I came back to check the results, the application was complete and working. But not on the platform I’d asked for. The AI had assessed my chosen project, decided on its own that it didn’t meet the requirements, selected a different open-source knowledgebase, and built the entire application on top of that instead.

The result actually worked better than my original choice. In this instance, the AI’s judgement was arguably correct. But it made a significant architectural decision without asking me. It didn’t flag a concern and wait for direction. It just went ahead.

Now imagine this in a school context. An AI tool tasked with setting up a new data integration decides that the school’s chosen platform doesn’t meet requirements and substitutes a different one. Maybe it’s better. Maybe it stores data in a different jurisdiction. Maybe it doesn’t meet the school’s compliance requirements. The AI won’t know or care about those constraints unless someone has explicitly told it about them. And the school is left with a system it didn’t ask for.

When AI destroys your data

The second experience was worse. I was building a web service on top of a PostgreSQL database. While generating seed data for testing, the AI decided, for reasons I still can’t fully explain, to drop the entire database. Every table, every record, gone.

It was development data. I could rebuild it. But the same tool, with the same level of access, connected to a production database, would have destroyed real data. There was no confirmation prompt, no warning, no “are you sure?” The AI had write access to the database, and it used that access in a way that was destructive.
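One practical mitigation is to put a guard layer between the AI tool and the database, so destructive statements are refused by default rather than executed on sight. A minimal sketch in Python; the `guarded_execute` wrapper, its deny-list, and the `human_approved` flag are my own illustration, not part of any vendor’s API:

```python
# Hypothetical guard layer between an AI tool and a database cursor.
# Destructive statements are refused unless a human has explicitly
# approved that specific statement out of band.

DESTRUCTIVE_KEYWORDS = ("DROP", "TRUNCATE", "DELETE", "ALTER")

class DestructiveStatementError(Exception):
    """Raised when the AI tool attempts a blocked statement."""

def guarded_execute(cursor, sql: str, human_approved: bool = False):
    """Run sql through cursor.execute(), but block destructive
    statements unless a human has signed off on this one."""
    stripped = sql.strip()
    first_word = stripped.split(None, 1)[0].upper() if stripped else ""
    if first_word in DESTRUCTIVE_KEYWORDS and not human_approved:
        raise DestructiveStatementError(
            f"Blocked {first_word!r} statement; human approval required."
        )
    return cursor.execute(sql)
```

A guard like this wouldn’t have made the AI smarter, but it would have turned a dropped database into a refused command, which is exactly the “are you sure?” moment that was missing.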

If that had been a school’s attendance records, safeguarding logs, or assessment data, the school would be the one answering questions from the ICO. Not Anthropic. Not OpenAI. Not Microsoft. The school.

Accountability doesn’t transfer to the AI vendor

This isn’t a grey area. AI companies are explicit about it in their terms of service. The output of the tool is the user’s responsibility. If the AI hallucinates data, produces incorrect reports, sends the wrong communication, or deletes records, the liability sits with whoever deployed it.

For schools, that means the headteacher, the governing body, and whoever is named as data protection officer. This isn’t unfair or unusual. It’s how every tool works. A school that deploys a misconfigured email system and accidentally sends pupil data to the wrong distribution list can’t blame Microsoft for the breach. The school configured it, the school deployed it, and the school is accountable for the result.

AI is no different, except that AI tools are more likely to behave unpredictably. A spreadsheet formula does the same thing every time you run it. An AI tool might produce different output from the same input, might make decisions you didn’t anticipate, and might take actions you didn’t explicitly authorise. That unpredictability makes the accountability question more urgent, not less.

What this looks like in practice

Schools are beginning to connect AI tools to real systems: MIS platforms, communication tools, attendance records, reporting dashboards. Each connection is a capability you’re granting, and each combination of capabilities creates its own risk profile.

MIS data plus email access. An AI tool that can read personal pupil data from the MIS and also send emails to parents could, in theory, send incorrect or sensitive information to the wrong recipients. A hallucinated attendance alert or a behaviour report sent to the wrong family isn’t the AI’s mistake. The school sent those emails.

Attendance records plus reporting. An AI with write access to attendance data and the ability to generate official reports could alter records that end up in documentation submitted to the local authority or Ofsted. If the data is wrong, the school submitted those reports.

Safeguarding records plus internet connectivity. An AI tool with access to safeguarding logs and an outbound internet connection has, at least in principle, the technical ability to transmit sensitive data externally. The school allowed that combination of access.

Staff timetabling plus HR records. An AI managing staff schedules that also has access to HR data (salary information, absence records, disciplinary notes) could inadvertently expose that information through a timetabling report or a shared calendar entry. The school granted those permissions.

None of these scenarios require malicious intent. They require only a tool that has more access than it needs and makes a mistake.
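The pattern in all four scenarios is the same: grants that look reasonable on their own combine into an unacceptable whole. A pre-deployment review can make that explicit by checking granted capabilities against known-risky combinations before sign-off. A sketch of that idea; the capability names and pairs are illustrative, drawn from the scenarios above, not from any real product:

```python
# Hypothetical pre-deployment audit: flag combinations of capabilities
# that together create a risk no single grant creates on its own.
# The pairs mirror the four scenarios above; a real list would be longer.

RISKY_PAIRS = (
    frozenset({"mis_read", "email_send"}),               # wrong data to wrong family
    frozenset({"attendance_write", "report_generate"}),  # altered official returns
    frozenset({"safeguarding_read", "internet_outbound"}),  # possible exfiltration
    frozenset({"timetable_manage", "hr_read"}),          # salary/absence leakage
)

def audit_capabilities(granted: set[str]) -> list[frozenset]:
    """Return every risky pair fully contained in the granted set."""
    return [pair for pair in RISKY_PAIRS if pair <= granted]
```

If `audit_capabilities` returns anything, someone senior should be asked whether the worst case implied by that pair is acceptable, before the tool goes live rather than after.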

Least privilege isn’t just for staff accounts

Schools already understand this concept for human users. A teaching assistant doesn’t get the same system access as the network manager. A subject teacher doesn’t have write access to the entire MIS. Permissions are scoped to the role, and broadened only when there’s a clear need.

The same thinking should apply to AI tools. Before deploying any AI system, it’s reasonable to ask: what data can this tool access? What actions can it take? What happens if it makes a mistake with the maximum scope of its permissions? If the worst-case scenario is unacceptable, the permissions are too broad.
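In database terms, answering those questions often comes down to giving the tool its own minimally privileged role rather than reusing an admin connection. A sketch that builds PostgreSQL-style statements for a read-only role; the role name, password placeholder, and table names are assumptions for illustration:

```python
# Hypothetical least-privilege setup for an AI tool's database access:
# a dedicated role that can SELECT from named tables and nothing else --
# no INSERT, no UPDATE, no DELETE, and no DDL such as DROP.

def read_only_grants(role: str, tables: list[str]) -> list[str]:
    """Build the SQL statements for a read-only role scoped to tables."""
    stmts = [f"CREATE ROLE {role} LOGIN PASSWORD 'change-me';"]
    stmts += [f"GRANT SELECT ON {table} TO {role};" for table in tables]
    return stmts

for stmt in read_only_grants("ai_assistant", ["attendance", "timetable"]):
    print(stmt)
```

With a role like this, the database-dropping incident earlier in this post becomes a permissions error instead of a data loss: the worst case is bounded by what was granted.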

I’ve written previously about approaches to AI and school data that reduce exposure by design. KCSIE 2026 is also starting to address the regulatory expectations around AI in schools. Both of those conversations connect directly to this one: the more access you give an AI tool, the more accountability you’re taking on.

I’ll keep using AI, but with my eyes open

I use AI tools every day, and the productivity gains are real. I’m not arguing that schools should avoid them. I’m arguing that schools should deploy them knowing that every action the tool takes, every output it produces, and every error it makes belongs to the school.

The AI won’t be held accountable. The AI company’s terms of service make that clear. The school will. That’s not a reason to stop. It’s a reason to think carefully about what you’re giving these tools access to, and whether the person approving that access understands what they’re signing up for.