About Guardian
Guardian was created to address a growing risk inside regulated organisations: one that most settings are already exposed to, but cannot currently see or control.
As AI tools like ChatGPT become part of everyday workflows, staff are increasingly using them to save time, write observations, draft communications, and support decision-making.
The problem is not that this is happening.
The problem is that it is happening without oversight.
The Reality
(Why Guardian exists)
In environments where safeguarding, GDPR, and accountability are critical, this creates a serious and often overlooked risk.
Staff may, often with good intentions, input:
Names
Development information
Behaviour notes
Sensitive or identifiable data
into external AI systems.
In most organisations:
There are no clear boundaries
There is no visibility
There is no audit trail
Which means:
You don’t know what’s being shared.
You can’t track it.
You can’t prove control.
The Hard Truth
Many organisations rely on policies and best practice to manage risk.
But policies do not prevent behaviour.
They only define expectations.
Disciplinary procedures exist because:
People do not always follow policy.
And when it comes to AI, where usage is fast, informal, and often unseen, the gap between policy and reality becomes even greater.
This is where the real risk sits.
Why This Matters
If a member of staff shares sensitive information through AI:
It may become a GDPR breach
It may raise safeguarding concerns
It may be questioned by parents, regulators, or inspectors
And without visibility or records:
You are still fully accountable, yet unable to evidence control.
Why Guardian Exists
Guardian was created from direct experience inside a regulated environment, where accountability sits with leadership regardless of where or how an issue occurs.
It became clear that while AI adoption was increasing rapidly, the systems to manage it simply did not exist.
Guardian was built to close that gap.
Not by restricting AI, but by bringing structure, visibility, and accountability to how it is used.
What Guardian Does
Guardian gives organisations a practical system to:
Set clear and enforceable boundaries for AI use
Define what can and cannot be shared
Create visibility over how AI is being used by staff
Maintain accountability and evidence if questioned
It turns something that is currently invisible into something visible, governed, and controlled.
AI is already being used inside your organisation.
That is not a future risk.
It is a current reality.
The question is not whether it's happening.
It's whether you have control over it.
Safeguarding, GDPR, and accountability cannot rely on assumption.
They require visibility, structure, and control.
Guardian ensures that AI usage within your organisation meets that standard.
Want to see how Guardian works in your setting?