TelcoNews UK - Telecommunications news for ICT decision-makers

Study probes how public sector AI handles UK accents

Thu, 26th Feb 2026

ICS.AI has commissioned a national study with the University of Sheffield to examine how conversational AI used in public services understands regional accents and dialects across the UK.

The study examines how AI performs in real interactions between citizens and public bodies, at a time when public concern remains high. Survey findings cited by the organisations show that 52% of UK residents worry AI may struggle to understand accents or dialects. The figure rises to 71% in Scotland and 67% in Northern Ireland; in Wales it is 57%.

Public bodies are increasingly using chatbots and voice systems for routine enquiries and triage. The shift has raised questions about whether automated systems can handle the full range of ways people speak, including local terms that are familiar in one place but unfamiliar elsewhere.

Research focus

The project is led by Dr Chris Montgomery, Senior Lecturer in Dialectology at the University of Sheffield. It draws on a systematic review of more than five decades of peer-reviewed research on accent and dialect variation across Great Britain.

The work is positioned as an academic-industry effort applying sociolinguistic research to the evaluation of conversational AI in public service settings. It focuses on how systems are assessed in live or realistic service encounters, rather than narrow laboratory-style tests.

A central finding from the literature review is that bias and misunderstanding often stem from interpretation rather than pronunciation. Much existing research has focused on speech patterns, with less attention on how listeners recognise and judge accents and dialects in practice. The organisations argue this gap matters for AI assessment because conversational systems must map spoken input onto meaning, including local vocabulary, pragmatic cues and the social context in which phrases are used.

The review also found that UK research on accents and dialects is extensive but concentrated in a small number of locations. Many regions and communities are lightly represented in published evidence. That imbalance can shape how developers and evaluators build test sets and benchmarks, while strong headline performance measures can conceal uneven results across speaker groups.

In public services, uneven performance can have practical consequences. Misrecognition can prolong interactions, increase abandonment rates, and raise the risk that a user is routed down the wrong pathway. In some services, that can affect access, eligibility screening and safeguarding responses.

Evaluation methods

ICS.AI and the University of Sheffield plan to translate the findings into evaluation frameworks for public sector conversational AI. The aim is a structured way to evidence and communicate how systems perform for diverse communities, including those under-represented in existing research.

The partners also plan work on what they describe as new capabilities within the ICS.AI platform. The release did not provide technical detail, but linked the planned capabilities to evaluation and measurement in live public service environments.

Alongside the evaluation work, the University of Sheffield plans further research into dialects that are under-represented in existing studies. The goal is to build a stronger evidence base for future joint work and for wider discussion about how public sector AI systems should be tested.

The project comes amid wider scrutiny of algorithmic decision-making and automation in government. Regulators and procurement teams have placed greater emphasis on transparency around performance, bias and inclusion. For conversational AI, that can include demonstrating how systems behave for different users, including those with different speech patterns and local language use.

"There is already a substantial body of research on accent and dialect variation in Great Britain. What has been missing, however, is its systematic application to how conversational AI is evaluated in real public service contexts. This project brings sociolinguistic theory and evidence on listener behaviour into applied evaluation, enabling performance claims to be framed in ways that are both scientifically defensible and socially meaningful," said Dr Chris Montgomery, Senior Lecturer in Dialectology, University of Sheffield.

ICS.AI framed the collaboration as part of its broader focus on inclusivity in systems deployed across public services.

"Public sector AI has to work for everyone, not just for people whose voices or speech patterns are easiest for systems to process. This collaboration empowers ICS.AI to apply established sociolinguistic evidence directly to how conversational AI is evaluated in live public service environments, helping us build inclusivity in a transparent and scientifically grounded way," said Dr Crispin Bloomfield.

Work on the evaluation frameworks and further research into under-represented dialects will continue as the partners move from the literature review to applied methods for assessing conversational AI performance in public sector use cases.