Your Questions, Answered

  • What is Guardian?

    Guardian is a system designed to help organisations manage and control how AI tools like ChatGPT are used by staff.

    It provides structure, visibility, and accountability — ensuring AI is used safely, responsibly, and in line with your safeguarding and GDPR obligations.

  • Why does my organisation need Guardian?

    AI is already being used by staff in many organisations — often informally and without oversight.

    Without a system in place:

    • You cannot see what is being shared

    • You cannot track how AI is being used

    • You cannot evidence control if questioned

    Guardian gives you that control — before it becomes a problem.

  • Isn't a policy enough?

    Policies define expectations — but they do not prevent behaviour.

    The reason disciplinary procedures exist in any organisation is because policies can be misunderstood, ignored, or bypassed.

    With AI, usage is fast, informal, and often unseen.

    Guardian goes beyond policy by giving you visibility and structure around what is actually happening.

  • What risks does Guardian help reduce?

    Guardian helps reduce risks including:

    • GDPR breaches through sharing identifiable or sensitive data

    • Safeguarding concerns linked to inappropriate data handling

    • Lack of accountability during inspections or audits

    • Reputational damage from misuse of AI

    It ensures you are operating with control rather than relying on assumptions.

  • What if we haven't introduced AI yet?

    In most cases, AI is already being used informally — even if it hasn't been formally introduced.

    Guardian ensures you are prepared, rather than reacting later.

  • Are our staff already using AI?

    In many cases, yes.

    Staff are increasingly using AI to:

    • Write observations

    • Support planning

    • Draft communications

    This is often done with good intentions — but without clear guidance or oversight.

    The risk is not misuse.
    It is a lack of structure.

  • Will Guardian stop staff from using AI?

    No.

    Guardian is not designed to prevent AI use.

    It is designed to ensure AI is used safely, appropriately, and within clear boundaries.

    AI will continue to be used — Guardian ensures you are in control of how.

  • How does Guardian work?

    Guardian provides a structured framework that allows you to:

    • Set clear rules for staff

    • Define what information can and cannot be shared

    • Create visibility around usage

    • Maintain accountability if anything is questioned

    It turns something informal and untracked into something controlled and manageable.

  • Who is Guardian for?

    Guardian is designed for organisations operating in regulated environments where data protection, safeguarding, and accountability are critical.

    This includes:

    • Early Years & Education

    • Healthcare

    • Financial Services

    • Legal & Professional Services

  • What happens if we do nothing?

    AI usage will continue — but without structure or oversight.

    This means:

    • Risk remains hidden

    • Responsibility still sits with you

    • You may not be able to evidence control if challenged

    The issue is not whether something will happen —
    it’s whether you are prepared if it does.

  • Is Guardian difficult to implement?

    No.

    Guardian is designed to be simple, practical, and easy to introduce into existing workflows.

    It works alongside your current policies and processes — strengthening them rather than replacing them.

  • How is Guardian different from policies and training?

    Policies and training set expectations.

    Guardian provides:

    • Structure

    • Visibility

    • Ongoing control

    It bridges the gap between what should happen and what actually happens day-to-day.

  • Will Guardian help if we are inspected or audited?

    Yes.

    Guardian helps demonstrate that you:

    • Have clear processes in place

    • Understand the risks of AI usage

    • Are actively managing and controlling it

    This supports accountability and strengthens your position if questioned.

  • Why act now?

    AI is already being used in day-to-day operations across many organisations.

    Regulation and expectations are catching up quickly.

    The risk is not future-based — it exists now.

    Guardian ensures you are ahead of that curve.

  • How do I get started?

    You can request access or book a call to see how Guardian would work in your organisation.