The problem
Most schools have an AI policy. Few have AI visibility.
By 2026 the risk profile has moved. Tools that were safe last term may have silently updated their data-sharing terms. Pupils are now routinely using agentic AI that can bypass keyword-based web filters. And inspectors are starting to ask what’s in the gap.
Model Drift
Vendors quietly change data-handling terms over the holidays. What you approved in September may no longer be safe by March.
Agentic AI
Pupils are using AI agents that perform multi-step browsing tasks — bypassing traditional keyword web filters entirely.
IP Leakage
Staff-authored lessons, EHCPs and pupil work are quietly harvested into free-tier model training sets. Once that data is in, it doesn’t come out.
Compliance
Two regulatory worlds, one audit
Standard IT providers stop at reading the guidance. We treat AI as both a regulated product and a pedagogical tool, because OPSS and the DfE both apply to you.
Product Safety — OPSS
Generative AI as a high-risk product
The Office for Product Safety and Standards expects schools to verify “Safety by Design” before deployment. Because your school is both a distributor and a user of these products, your duty of care now extends well beyond basic filtering.
Educational Standards — DfE
Human-in-the-loop, by default
DfE guidance mandates strict human-in-the-loop protocols, rigorous pupil data protection, and active protection against AI-generated misinformation in teaching materials.
Our audit bridges the gap between these national standards and daily classroom practice — with technical evidence, not hand-waving.
The framework
Four pillars. One report. Zero surprises.
Technical Discovery
A deep scan of network traffic and endpoint behaviour to surface Shadow AI — the unmanaged assistants, agentic tools and bypass sites being used off-policy by staff and pupils.
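To give a flavour of what that discovery step can look like, here is a minimal sketch of one technique: flagging traffic to well-known consumer AI services in an exported web-filter or DNS log. The domain list, CSV column names and file name are illustrative assumptions rather than our actual tooling, and a real scan uses a far broader signature set.

```python
import csv
from collections import Counter

# Illustrative (not exhaustive) signature list of consumer AI endpoints.
AI_DOMAINS = {
    "chat.openai.com", "chatgpt.com", "gemini.google.com",
    "claude.ai", "character.ai", "perplexity.ai",
}


def flag_shadow_ai(proxy_log_csv: str) -> Counter:
    """Count requests to known AI services in a proxy/DNS log export.

    Assumes a CSV with 'host' and 'username' columns -- adjust these to
    match the export format of your own filtering or firewall product.
    """
    hits = Counter()
    with open(proxy_log_csv, newline="") as f:
        for row in csv.DictReader(f):
            host = row.get("host", "").lower().strip()
            if any(host == d or host.endswith("." + d) for d in AI_DOMAINS):
                hits[(host, row.get("username", "unknown"))] += 1
    return hits


if __name__ == "__main__":
    for (host, user), count in flag_shadow_ai("proxy_export.csv").most_common(10):
        print(f"{user:<20} {host:<25} {count} requests")
```

Domain matching is only a first pass; agentic tools, API access and proxy sites are easy to miss this way, which is why the pillar above pairs the network scan with endpoint behaviour analysis.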
Adversarial Red-Teaming
We stress-test your web filters and school-sanctioned AI against the prompt-injection and jailbreak techniques we see pupils share. If the guardrail breaks, we want to be the ones who find it.
Regulatory Mapping
A line-by-line compliance check against the 2026 OPSS Generative AI product safety standards, the DfE Generative AI in Education guidance, UK GDPR, and KCSIE.
IP & Data Audit
Verification that staff-authored resources and pupil coursework are not being silently ingested into third-party model training pipelines by free-tier consumer AI.
The journey
From kickoff to board briefing in ~5 weeks
- Phase A
Weeks 1–2
Discovery & Usage Mapping
Shadow AI network scan, anonymised pupil & staff surveys, and focus groups with Heads of Department to see how AI is actually altering pedagogy.
- Phase B
Week 3
Policy & Data Governance
GDPR deep-dive, AUP alignment against current DfE guidance, and OPSS 'Safety by Design' vetting of the full edtech stack.
- Phase C
Week 4
Technical Risk Testing
Guardrail efficacy testing using documented bypass techniques, hallucination assessment of AI-generated curriculum materials, and agentic-tool perimeter review. A short sketch of the guardrail test harness follows this timeline.
- Phase D
Week 5
Strategic Remediation
Delivery of the 10,000-word report, risk matrix, DfE vs Practice tracker, and a 0–30 day / 1–6 month / 6–12 month roadmap. Optional Board briefing.
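As referenced in Phase C, here is a minimal sketch of a guardrail efficacy harness. The `is_blocked` callback and the placeholder prompts are illustrative assumptions: in practice the check might be an API call to the sanctioned AI tool or a request routed through the school’s web filter, and the prompts come from documented bypass techniques rather than anything invented here.

```python
from dataclasses import dataclass
from typing import Callable, Iterable


@dataclass
class GuardrailResult:
    prompt: str
    blocked: bool


def run_guardrail_tests(
    prompts: Iterable[str],
    is_blocked: Callable[[str], bool],
) -> list[GuardrailResult]:
    """Replay a library of documented bypass prompts against a filter.

    `is_blocked` is a stand-in for your own check -- e.g. sending the
    prompt through the school's web filter or sanctioned AI tool and
    recording whether it was stopped.
    """
    return [GuardrailResult(p, is_blocked(p)) for p in prompts]


def summarise(results: list[GuardrailResult]) -> str:
    passed = [r for r in results if not r.blocked]
    rate = len(passed) / len(results) if results else 0.0
    return f"{len(passed)}/{len(results)} bypass prompts got through ({rate:.0%})"


if __name__ == "__main__":
    # Placeholder data: real runs use a library of documented bypass prompts
    # and a check wired to the school's actual filter or AI tool.
    prompts = ["<documented bypass prompt A>", "<documented bypass prompt B>"]
    results = run_guardrail_tests(prompts, is_blocked=lambda p: "A" in p)
    print(summarise(results))
```

A summary like this would typically feed the Safeguarding row of the risk matrix: the bypass rate informs probability, and the nature of what got through informs impact.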
What the report covers
A risk matrix your governors can actually read
Every audit delivers a quantitative risk matrix alongside a line-by-line DfE vs Practice tracker. This is a preview of the categories we benchmark.
| Risk category | Requirement | Audit check |
|---|---|---|
| PII Exposure | Zero identifiable pupil or staff data in public LLMs | Scan for staff use of personal ChatGPT, Gemini & Claude accounts |
| Safeguarding | Filters prevent exposure to harmful or inappropriate AI-generated content | Active prompt-injection testing against school web filters |
| Academic Integrity | Assessment design resilient to current AI capability | Vulnerability analysis of homework and coursework structures |
| Model Drift | Vendor T&C changes don’t silently break compliance | Quarterly model-update monitoring against approved edtech stack (sketched below the table) |
| IP Leakage | School-authored content not used for external training | Review of vendor contracts and data-residency guarantees |
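The Model Drift check above lends itself to lightweight automation between audits. Below is a minimal sketch, assuming a maintained list of terms-and-privacy URLs for the approved edtech stack; the vendor name, URL and state file are all illustrative placeholders, not part of any specific product.

```python
import hashlib
import json
import urllib.request
from pathlib import Path

# Illustrative placeholder -- replace with the terms/privacy URLs of your approved stack.
VENDOR_TERMS = {
    "ExampleEdTech": "https://example.com/ai-terms",
}

STATE_FILE = Path("terms_hashes.json")


def fetch_hash(url: str) -> str:
    """Download a vendor terms page and return a SHA-256 of its contents."""
    with urllib.request.urlopen(url, timeout=30) as resp:
        return hashlib.sha256(resp.read()).hexdigest()


def check_for_drift() -> list[str]:
    """Compare current terms-page hashes with the last stored snapshot."""
    previous = json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {}
    current, changed = {}, []
    for vendor, url in VENDOR_TERMS.items():
        current[vendor] = fetch_hash(url)
        if vendor in previous and previous[vendor] != current[vendor]:
            changed.append(vendor)
    STATE_FILE.write_text(json.dumps(current, indent=2))
    return changed


if __name__ == "__main__":
    drifted = check_for_drift()
    print("Review needed:", ", ".join(drifted) if drifted else "no changes detected")
```

A changed hash only says that something moved, not whether it matters; a person still needs to re-read the terms, which is where the quarterly Model Drift monitoring in the Strategic tier comes in.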
Pricing
Three guide prices. One conversation.
The figures below are guide prices for a typical engagement at each tier. We’re happy to scope a package that genuinely fits your context: budgets, size, and what you already have in place all shape the final fee. Professional and Strategic audits include a signed Certificate of AI Compliance for inspection preparation.
Guide prices · talk to us about what’s right for your school.
Tier 1
Diagnostic
£1,650 + VAT
Best for: Primary schools
- Remote policy review
- Anonymised staff & pupil usage survey
- DfE alignment gap analysis
- Written risk summary + top-10 remediation actions
Tier 2
Professional
£3,250 + VAT
Best for: Secondary & prep schools
- Everything in Diagnostic
- On-site focus groups with SLT, DSL, IT Lead
- Active filter stress-testing & guardrail bypass attempts
- Full 10,000-word AI Safety & Compliance Report
- Signed Certificate of AI Compliance
Tier 3
Strategic
from £8,000 + VAT
Best for: Multi-Academy Trusts
- Everything in Professional — applied at trust level
- Representative sample of trust schools
- Trust-wide CPD roadmap
- Board of Trustees briefing & Q&A session
- Quarterly Model Drift monitoring (12 months)
Budget tight, or scope smaller? Let’s talk. We routinely blend tiers: a Diagnostic plus a single on-site day, or a Professional audit focused on the parts of the estate that actually matter. Either way, we’ll quote a fee that fits.
Why EddyAI
We don’t just read the guidance. We build the technology.
EddyAI is the education arm of Pilot Generative AI — a UK company building AI products used in schools and workplaces every day. Our auditors understand the underlying architecture of the models your pupils are prompting, because we ship LLM-based systems ourselves. That’s a level of depth standard IT providers can’t match.
Operators, not just consultants
We run production AI in education and workplace settings under real privacy constraints.
Cyber Essentials certified
Your data is handled to the UK government-backed Cyber Essentials standard throughout.
UK Government supported
Our education work is supported by the UK Shared Prosperity Fund.
The deliverable
A 10,000-word AI Safety & Compliance Report
- I. Risk Matrix — quantitative Probability × Impact heat map across privacy, academic integrity and safeguarding.
- II. DfE vs Practice Tracker — line-by-line Compliant / Partial / Non-Compliant scoring.
- III. Remediation Roadmap — 0–30 day stop-gaps, 1–6 month policy updates, 6–12 month walled-garden transition.
- IV. Certificate of AI Compliance — signed, inspection-ready.
FAQ
Before you get in touch
Get started
Request your audit
Tell us a little about your school or trust. We’ll reply within one UK business day to arrange a free 30-minute scoping call and send a tailored fixed-fee proposal.
- One business day response
- Free 30-minute scoping call, no commitment
- Fixed-fee proposal within 48 hours
Ensure your school or trust is 2026-ready.
The pre-September audit window is the one everyone wants. Slots are limited.