AI in Public Services
A Step Forward or a Cost-Cutting Exercise?
AI promises to modernise the UK’s public sector, from automating routine services to transforming healthcare diagnostics. But the conversation has been dominated by efficiency metrics and budget projections, not ethical consequences or citizen wellbeing.
Is this a digital revolution designed to empower citizens, or a calculated move to stretch shrinking budgets without addressing deeper structural inequalities?
In our view, adopting AI in public services must be seen not just as a technological upgrade, but as a political and moral decision. Technology never arrives in a vacuum; it brings with it assumptions about who benefits, who controls the systems, and what kind of society we want to build.
1. Improving Public Experience, or Replacing It?
Opportunities for Better Services
24/7 response: Chatbots and virtual assistants can answer routine queries without delay; HMRC, NHS 111, and local councils are already trialling such tools. 91% of councils in England use some form of AI or automation in service delivery (LocalGov Digital, 2023).
Precision diagnostics: AI in the NHS has shown remarkable results in areas like cancer screening, with some models reducing diagnostic times by 30%.
Tailored education: Adaptive learning systems in UK schools (such as CENTURY Tech) personalise content and support teachers.
Accessibility: Real-time translation, transcription, and text simplification offer vital support for non-native speakers and neurodiverse individuals.
But these successes are conditional. Success in a controlled pilot is not the same as success at scale. AI will not erase complexity from human services, nor should it.
Risks of Poor Substitution
Design flaws: Many AI systems fail when presented with real-world ambiguity. Poorly designed public-facing bots routinely escalate frustration instead of resolving it; in one Gov.uk chatbot pilot, 73% of users said they preferred a human agent for anything beyond simple queries.
Dehumanisation: Social care, mental health, and child protection require empathy, not automation.
Digital exclusion: More than 8 million UK adults still lack basic digital skills (Lloyds Bank, 2023). What happens when their only route to public services is an app they cannot navigate?
Key Policy Questions
Should AI systems always have a human fallback option for escalations and complex queries?
Will digital-first models entrench inequality, privileging those already digitally literate?
2. Trust, Bias, and the Hidden Costs of Automation
Ethical Flashpoints
The same data that powers efficiency also powers decision-making. In the wrong hands, or even with the wrong assumptions, AI can codify existing social inequalities.
Bias: Algorithms trained on biased datasets have already shown disproportionate impact on marginalised communities, whether in housing, health, or policing. Only 42% of public sector bodies currently perform formal bias audits on their AI systems (GovAI, 2024); a minimal sketch of what such an audit checks follows below.
Opacity: Many AI systems are “black boxes.” Citizens may never know why they were denied benefits or flagged as high-risk. The Ada Lovelace Institute found that 60% of the UK public is uncomfortable with AI making decisions about welfare or policing.
Surveillance: Predictive policing, automated fraud detection, and facial recognition all pose creeping threats to civil liberties.
The Dutch childcare benefits scandal (2019–2020) offers a clear warning: reliance on opaque systems can cause irreparable damage to families, and to democratic trust.
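What does a formal bias audit actually involve? At its simplest, it starts by comparing outcome rates across demographic groups. The sketch below illustrates one common heuristic, the “four-fifths” disparate-impact check; the records, group labels, and 80% threshold are illustrative assumptions for this article, not any department’s actual audit methodology.

```python
from collections import defaultdict

# Hypothetical decision records: (demographic_group, approved).
# In a real audit these would come from the system's own decision logs.
records = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in records:
    totals[group] += 1
    approvals[group] += approved

rates = {g: approvals[g] / totals[g] for g in totals}
benchmark = max(rates.values())  # highest approval rate across groups

# "Four-fifths" heuristic: flag any group whose approval rate falls
# below 80% of the best-treated group's rate.
for group, rate in sorted(rates.items()):
    ratio = rate / benchmark
    status = "FLAG FOR REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: approval rate {rate:.0%}, ratio {ratio:.2f} -> {status}")
```

A real audit would go much further, examining input data, proxy variables, and error rates across groups, but even this simple comparison would surface the kind of disparity described above.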
Frameworks and Fallbacks
The UK has launched promising initiatives like the Algorithmic Transparency Standard.
Human-in-the-loop models are encouraged for high-stakes decisions (sketched below).
Public sector AI ethics boards (e.g. within NHSX) offer some scrutiny.
But these are still early, inconsistent, and often underfunded. Transparency must be more than a checkbox. It must be enforceable.
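To make “human-in-the-loop” concrete rather than a checkbox, the sketch below shows one common routing pattern under assumed thresholds: a decision is finalised automatically only when the stakes are low and the model is confident, and every other case goes to a human caseworker. The field names and cut-offs are hypothetical illustrations, not a prescribed standard.

```python
from dataclasses import dataclass

@dataclass
class Assessment:
    case_id: str
    risk_score: float   # model's risk estimate, 0.0 (low) to 1.0 (high)
    confidence: float   # model's self-reported certainty in that estimate

# Hypothetical thresholds: in practice these would be set by policy,
# published, and kept under independent review, not chosen by a vendor.
CONFIDENCE_FLOOR = 0.90
HIGH_STAKES_RISK = 0.50

def route(a: Assessment) -> str:
    """Finalise automatically only when the case is low-stakes and the
    model is confident; everything else goes to a human caseworker."""
    if a.risk_score >= HIGH_STAKES_RISK or a.confidence < CONFIDENCE_FLOOR:
        return "human_review"
    return "auto_approve"

for case in (Assessment("C-101", 0.12, 0.96),   # low risk, confident -> auto
             Assessment("C-102", 0.71, 0.98),   # high stakes -> human
             Assessment("C-103", 0.20, 0.55)):  # uncertain -> human
    print(case.case_id, "->", route(case))
```

The design point is that the thresholds themselves are a policy decision: they belong in published, auditable configuration, not buried in vendor code.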
Key Policy Questions
Should there be a Public AI Ombudsman empowered to investigate and overturn unjust algorithmic decisions?
Who is accountable when automated decisions go wrong: software vendors, departments, or ministers?
3. Cost-saving or Austerity by Algorithm?
AI promises transformative efficiencies, but not without cost: not just financial cost, but cost to institutional autonomy and social trust.
Optimistic Projections
Automation could save the UK public sector £12–15 billion annually by 2035 (McKinsey UK).
Predictive analytics can pre-empt hospital admissions, detect early signs of homelessness, and flag risks to children.
A £110m Cabinet Office fund (2024) supports AI pilot projects aimed at public good.
But Who Pays for Failure?
57% of AI pilots in the public sector never progress beyond the trial phase (AI in the Public Sector roadmap, 2023), largely due to cost, integration complexity, or public resistance.
Lock-in risks: Government departments may become reliant on private vendors with little long-term transparency or flexibility.
No shared success metrics: Without KPIs that track public satisfaction, outcomes, and inclusion, cost-saving may become the only visible ‘win’.
And perhaps most worryingly, no national debate has addressed what AI-powered austerity looks like. It’s cheaper to automate, but is it better?
Key Policy Questions
Should Parliament establish national AI metrics for success, covering outcomes, not just efficiencies?
How will departments retain control over service design if foundational technologies are outsourced?
4. The Deeper Question: What Kind of State Are We Building?
The risk is not simply that AI will fail. The greater risk is that it succeeds, on the wrong terms.
If AI becomes a mechanism to justify cuts, strip services of human interaction, and silence appeals, then the UK is not enhancing services, but weakening the social contract.
AI should be a force multiplier for human services, not a justification to remove them. That means putting inclusion, trust, and accountability first. It means recognising that public services are not tech platforms; they are the infrastructure of dignity in society.
Final Thoughts
The deployment of AI in public services cannot be framed solely as a technical or fiscal decision. It is a societal one.
We must ask who benefits, who is excluded, and who decides how AI is used.
Without robust safeguards, transparency, and citizen engagement, AI risks entrenching inequality and reducing access under the guise of innovation.
A truly innovative approach would centre public need, not just budget logic.