
There’s a quiet risk building in the rush to apply AI across public services: that we’re designing for systems, not people. That risk shows up in everything from chatbot frustration to flawed eligibility models. It’s the difference between a technically accurate system and a human-centred one – between a process that works on paper, and one that genuinely works for citizens.
Public sector AI must be explicitly designed around citizen needs. That means thinking beyond data science to tackle the policy, service design and trust issues that AI brings. It’s the only way these tools will deliver equitable, usable outcomes at scale (and help realise the estimated billions in productivity savings).
What is citizen-centred AI?
It’s easy to say “put the user at the heart”, but harder to define what that looks like when your “user” is the public, in all of its complexity.
Our current definition of citizen-centred AI is:
- Designing systems around real-world behaviours and needs, not just process maps
- Making sure AI supports, not replaces, meaningful interactions
- Ensuring systems are:
  - Accessible and inclusive
  - Resilient to misuse or misunderstanding
- Building feedback loops that let people challenge or improve the output
- Providing transparency in ways people can actually understand (not just legal T&Cs or ethics PDFs)
Citizen-centred AI is essential for public legitimacy. When people don’t trust the AI embedded in a service – or worse, don’t know it’s there – you undermine both effectiveness and democratic accountability.
The trouble with “policy in, AI out”
One reason public sector AI risks drifting from citizen needs is the traditional divide between policy and delivery.
Take a triage model for support services. Designers might build it to reflect policy priorities such as urgency, risk, and likelihood of engagement. But what happens if the data feeding that model doesn’t reflect lived experience? Or if the outputs push users down generic pathways that ignore cultural or contextual factors?
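To make that gap concrete, here’s a minimal, hypothetical sketch of a purely policy-weighted triage score – the fields, weights and threshold are illustrative, not drawn from any real service. Notice that nothing in it captures lived experience, context or accessibility needs:

```python
from dataclasses import dataclass

# Illustrative policy weights: urgency, risk and likelihood of engagement.
# Nothing here reflects lived experience, cultural context or accessibility needs.
POLICY_WEIGHTS = {"urgency": 0.5, "risk": 0.3, "engagement_likelihood": 0.2}

@dataclass
class SupportCase:
    urgency: float                # 0-1, taken from the referral form
    risk: float                   # 0-1, derived from historical administrative data
    engagement_likelihood: float  # 0-1, predicted from past contact records

def triage_score(case: SupportCase) -> float:
    """Weighted sum of policy priorities - a model that 'works on paper'."""
    return sum(POLICY_WEIGHTS[name] * getattr(case, name) for name in POLICY_WEIGHTS)

def assign_pathway(case: SupportCase) -> str:
    """Pushes every case down one of two generic pathways based on a single threshold."""
    return "priority_team" if triage_score(case) >= 0.6 else "standard_digital_route"
```

And if engagement_likelihood is learned from past contact data, the people who previously struggled to engage – often those the service exists for – are quietly deprioritised.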
Without meaningful user insight – gathered through research, tested through prototypes, and validated in live service – even a well-governed model can cause harm. Citizen-centred AI closes that gap between intent and impact.
And yes, this means uncomfortable conversations. Sometimes what we learn from users contradicts long-held policy assumptions. But surfacing those tensions is the point. Good design work helps policymakers refine, not just deliver.

Case examples: the good, the bad, and the quietly harmful
We’ve seen a range of examples in recent years that highlight the gap between AI capability and real-world delivery:
- The good: Some local authorities have quietly embedded AI in triaging housing repair requests, reducing backlog and surfacing urgent needs faster – all while keeping human override in place. This success came from co-designing with housing officers and tenants from day one.
- The bad: A well-publicised example from the Netherlands involved an algorithm used for fraud detection in benefits claims. Despite being technically advanced, the system disproportionately flagged people from a migrant background, leading to political scandal and real-life hardship. The problem wasn’t just the model; it was a lack of oversight, transparency, and redress.
- The quietly harmful: We’ve encountered chatbots that fail for people with dyslexia, voice assistants that don’t recognise regional accents, and document scanning tools that reject handwritten forms from older users. These aren’t malicious, but they’re exclusionary nonetheless.
These examples show why teams can’t leave “citizen-centred AI” to chance or assume it will emerge through iteration. They must design it intentionally from the start.
Three principles for designing AI that works for everyone
So what does citizen-centred AI look like in delivery? From our work with digital and policy teams across government, we’ve seen three principles make a real difference:
1. Design for scepticism, not just efficiency
Assume your users will doubt, misinterpret, or even resist AI decisions – and build accordingly.
That means:
- Designing explainability into interfaces: “Why was I matched to this outcome?”
- Providing fallbacks or human channels for edge cases or uncertainty (see the sketch after this list)
- Avoiding black-box decision flows that confuse users or create learned helplessness
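As a rough illustration – with invented field names and an illustrative confidence threshold, not a recommendation – an automated outcome can carry plain-language reasons and an explicit human route whenever the model is uncertain:

```python
from dataclasses import dataclass
from typing import Optional

CONFIDENCE_FLOOR = 0.75  # illustrative threshold only

@dataclass
class ExplainedDecision:
    outcome: str                  # e.g. "matched_to_childcare_support"
    reasons: list[str]            # plain-language reasons, not raw feature weights
    confidence: float             # the model's own uncertainty estimate, 0-1
    human_review_url: Optional[str] = None  # offered whenever confidence is low

def present_decision(outcome: str, reasons: list[str], confidence: float) -> ExplainedDecision:
    """Attach reasons, and a human fallback when uncertain, to every automated outcome."""
    needs_human = confidence < CONFIDENCE_FLOOR
    return ExplainedDecision(
        outcome=outcome,
        reasons=reasons,
        confidence=confidence,
        human_review_url="/request-a-person" if needs_human else None,
    )
```

The design choice that matters is structural: the explanation and the escape route are part of the decision itself, not an afterthought bolted onto the interface.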
Remember: public services aren’t e-commerce. If an AI model suggests the wrong coat size, it’s annoying. If it denies access to childcare support, it’s life-changing.
2. Test with the margins, not just the middle
Inclusion should be built into how we design, not just measured as a KPI after the fact.
Make sure your testing includes:
- People with low digital confidence or literacy.
- Those using assistive technologies.
- People from different cultural, linguistic or regional backgrounds.
- Individuals with previous negative experiences of state systems.
If your AI-enabled service only works for the digitally confident middle, it’s not fit for the public sector.
3. Make feedback loops visible and useful
One of the promises of AI is that it can learn. But in public services, that promise often gets lost in deployment.
To keep systems aligned with real user needs, we should:
- Let users flag if an output feels wrong or confusing
- Capture that input in a way that feeds improvement – not just a dead-end form (see the sketch after this list)
- Give users confidence that the system is monitored and accountable
- Adopt a deliberate, patient, test-and-learn approach to iteration, rather than chasing the instant results AI seems to promise
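To show what a non-dead-end feedback loop might involve – again a hypothetical sketch with invented names – each flag is tied to the specific decision it concerns and lands in a queue the model’s owning team actually reviews, rather than disappearing into a form:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class OutputFlag:
    decision_id: str   # ties the flag to the specific automated output
    reason: str        # e.g. "felt wrong", "confusing", "missing context"
    free_text: str     # the citizen's own words
    raised_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class FeedbackLoop:
    """Illustrative store that routes citizen flags to the team that owns the model."""

    def __init__(self) -> None:
        self._flags: list[OutputFlag] = []

    def flag_output(self, decision_id: str, reason: str, free_text: str) -> OutputFlag:
        flag = OutputFlag(decision_id, reason, free_text)
        self._flags.append(flag)
        return flag

    def review_queue(self) -> list[OutputFlag]:
        """What the owning team works through, oldest first, each sprint."""
        return sorted(self._flags, key=lambda f: f.raised_at)
```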
Public sector AI should feel like something done with people, not to them.

Who owns citizen-centred AI?
Ownership is where it gets tricky, and where transformation efforts often stall.
AI initiatives typically involve:
- Policy teams, focused on outcomes and fairness
- Data scientists, focused on model performance
- Service designers, focused on usability and access
- Delivery teams, focused on implementation and deadlines
Without shared ownership, the “citizen” element falls between the cracks.
We’ve found the best results come when there’s a dedicated role or team responsible for AI experience – someone who can bridge the technical and human dimensions and has permission to slow things down when design issues arise.
It’s not always a full-time job. But someone needs to hold the citizen lens in every sprint.
Trust is a design output
Public trust in government is hard-won and easily lost. The next decade will see more AI embedded into public services – not just in back-office optimisation, but in front-line experiences.
We mustn’t assume that accuracy equals acceptability. We mustn’t let technical excitement overshadow human realities. And we mustn’t build systems people don’t understand, can’t challenge, or feel alienated from.
Citizen-centred AI is the only kind of AI the public sector should build.
And if we get it right, we don’t just get better services. We get stronger institutions, greater legitimacy, and a more inclusive digital future.