Artificial Intelligence Transparency Statement
This statement explains how we use Artificial Intelligence (AI), how we keep it safe, and how we stay accountable to the community. We are committed to fairness, independence, and transparency.
How we use AI today
We currently use AI in the following ways to make our services clearer and more efficient:
- Complaint submissions – AI helps improve the clarity of complaints submitted via our online complaint form by prompting complainants with follow-up questions and suggesting improvements where the information provided is ambiguous or incomplete.
- Knowledge base – AI tools help our staff find information quickly, so advice is consistent and timely.
- Workplace productivity – AI tools help streamline our processes and support timely communication with our stakeholders, with all outputs subject to human review.
How we may use AI in the future
We are exploring additional ways that AI could support our work. This may include helping staff manage routine tasks, improving how we analyse trends and systemic issues, and enhancing internal functions such as training and quality assurance. We may develop AI tools to assist with these functions. Any future use of AI will always be introduced carefully, with human oversight and safeguards in place, and in line with our commitment to fairness, independence, and transparency.
What we do not use AI for
Some uses of AI are off limits. We will never delegate our core responsibilities to AI:
- We do not use AI to make decisions about complaints.
- All determinations and outcomes are made by people.
How we manage risks
Our use of AI is carefully governed with a strong focus on protecting fairness, independence, and public trust:
- Formal AI Policy – All AI use is governed by our Artificial Intelligence Policy, approved by the Board and reviewed regularly.
- Review and register – We assess the suitability of each AI tool before approving it for internal staff use, and we maintain an internal register of all AI tools in use.
- Risk management – AI-related risks (such as bias, privacy breaches, or reputational harm) are tracked in our risk register and reviewed regularly.
- Privacy and security – A Privacy Impact Assessment is completed before any new AI tool is introduced.
- Monitoring – AI outputs are checked for accuracy and bias, and systems are regularly reviewed.
- Human in the loop – Staff remain accountable for all AI inputs and outputs.
Our commitment
Our commitment is clear: AI will only ever be used to strengthen, not replace, the fairness, accessibility, and independence of the Ombudsman scheme. We will review our use of AI regularly and may update this statement at any time at our discretion.