
What does the AI Exemplar programme mean for the future of the Service Standard? 

The UK government’s new AI Exemplar programme signals a formal intent to experiment with citizen-centred AI. It’s recognition that AI is a genuine enabler of change that can improve citizens’ lives where it matters most. Beyond the excitement, however, lies an important question: what does this mean for the globally lauded Service Standard, which has set the benchmark for safely and efficiently delivering digital services worldwide?  

For the Exemplar programme to be successful, there’s an excellent opportunity to evolve the Service Standard. AI brings characteristics that the current framework cannot fully address: unpredictability, bias risks, accountability gaps, and the tension between personalisation and impartiality. Without adapting how we assess, deliver and govern public services, exemplars risk being safe pilots that fall short of their true potential.  

From easy problems to hard ones  

The government’s first wave of digital transformation tackled bounded, transactional services: registering to vote, applying for a passport, renewing a driving licence. These were deterministic problems where outcomes could be clearly defined and tested.  

Today, citizens’ expectations have shifted. In the past decade, public service satisfaction has decreased from 79% to 69%, with over two-thirds of the top 75 government services lagging behind private-sector benchmarks. 

With the increasing popularity of AI tools such as ChatGPT, which are transforming how people search and consume information, the refrain “why haven’t the government solved this with AI yet?” will soon be common.  

We’ve made significant progress on the more foundational wins. The more complex problems are structural: fragmented services, duplicated processes, and citizens promised they need only “tell us once” who in practice must repeat the same details across multiple departments.  

AI’s potential lies not in digitising forms but in enabling joined-up data, responsive services, and information that adapts to people’s specific circumstances. Exemplars must therefore go beyond technical demonstrations and tackle systemic complexity at scale, not just save teachers a few minutes of lesson planning (as much of a godsend as that will be).  

[Image: a close-up of a computer screen displaying the ChatGPT interface, representing AI tools and their growing role in public service delivery.]

The messy middle  

The Exemplar programme reflects an all-too-familiar story. Senior leadership pushes adoption, and frontline practitioners are eager to experiment. But the middle layer of management often stalls progress, fearful of reputational damage, regulatory breaches, and the infamous “Daily Mail test.”  

Exemplars are meant to cut through this by providing safe contexts with Cabinet Office backing. Their real role is cultural: creating confidence that AI can be applied responsibly, without careers ending on the front page.  

Learning, we must remind ourselves, is an outcome in and of itself. If exemplars fail to unblock this messy middle, or are not given the space to safely untangle it, they will not scale or pave the way for similar projects to tackle different challenges across government.  

Why the Service Standard matters  

Since 2013, the Service Standard has been the guiding framework for UK digital services. It aimed to codify best practice for service delivery: understand users and their needs, iterate based on evidence, and ultimately deliver services that are reliable, inclusive, and accountable. It made the UK a global reference point.  

But the Service Standard was conceived for deterministic systems. It assumes services can be controlled and repeated with certainty. AI, as we know, does not behave that way.  

[Image: an older couple sitting together at a table using a laptop, symbolising accessibility, inclusion, and citizen interaction with digital public services.]

The reliability challenge  

One of the Service Standard’s core requirements is to “operate a reliable service.” But with AI, reliability is harder to define. AI systems, particularly those powered by large language models, will never be completely free of hallucinations or bias. Even as accuracy improves, uncertainty is inherent.  

This risk creates tension: can a service that may occasionally provide false information ever pass the reliability test under the current framework?  

Towards an AI Age Service Standard  

If exemplars are to have lasting value, the Service Standard itself must adapt. The answer is not abandoning the framework but extending it with AI-specific principles, grounded in citizen-centred design. For example:  

  • Transparency and auditability: AI services must capture enough context for review and accountability, without breaching privacy.  
  • Human-in-the-loop safeguards: in sensitive contexts such as health, justice or safeguarding, AI should support professionals, not replace them. This measure is tricky as it adds ongoing costs to service delivery.  
  • Objective, factual, and authoritative information: revisiting current guidance that content should not influence users or interpret information on their behalf, and deciding how that squares with adaptive AI responses.  
  • Clear accountability: citizens must know when they are engaging with AI, and who is ultimately responsible.  
  • Citizen-centred outcomes: services must be judged not only on efficiency but on whether they increase trust, reduce burden, and deliver real-world improvements for people.  

These are all prerequisites for trust. Without them, exemplars risk being curiosities rather than templates for adoption.  

Personalisation vs. public trust  

From a citizen-facing perspective, AI’s greatest promise is personalisation: services that adapt to citizens’ context rather than forcing them through flat content or standardised forms. An AI-enabled Gov.uk chatbot could guide a person across multiple departments seamlessly.  

But personalisation amplifies risk. The more flexible the interaction, the greater the potential for error or bias. The more data required, the greater the privacy concerns. And government must be held to a higher bar than the private sector: citizens expect services that are authoritative, impartial and safe by design.  

Balancing personalisation, reliability, and neutrality is the central design challenge of AI in government.  

[Image: a long suspension bridge stretching across a deep forested valley toward a distant hill, symbolising progress, connection, and navigating uncertainty.]

Exemplars should light the spark 

The AI Exemplar programme is welcome. It signals political will, provides a safe space to experiment, and showcases potential. But the bigger opportunity lies in redefining the standards that judge those exemplars. 

If the Service Standard adapts to meet the potential of these new technologies — embedding accountability, auditability, fairness and citizen-centred outcomes — the UK can once again lead the world in digital government. If it does not, exemplars risk being remembered as clever pilots that never shifted the system.  

The future of AI in public services will not be determined by technology alone, but by whether we are bold enough to rewrite the rulebook that has made digital government successful so far.