A new report from the Ada Lovelace Institute has found that AI transcription tools are saving time in social work, but are also introducing new risks to people such as bias and ‘hallucinations’.
The researchers interviewed 39 social workers with experience of using AI across 17 local authorities, alongside senior staff involved in procuring and evaluating the tools.
The research found that resource constraints are driving widespread piloting, adoption and evaluation of AI transcription tools.
It also found that local authority evaluations focus on efficiency rather than on the impact on people who receive care.
Almost all of the social workers interviewed said that AI transcription tools bring meaningful benefits to their work, but these benefits are not experienced uniformly, and perceptions of reliability, accuracy and the need for oversight vary significantly.
The research also concluded that there is no consensus on when it is appropriate to use AI transcription tools in social care.
The report set out a series of recommendations to improve the governance and evaluation of AI transcription tools to ensure they are used safely and responsibly.
These include that the government should require that local authorities record their use of AI transcription tools through the Algorithmic Transparency Reporting Standard and that the government should extend its pilots of AI transcription tools to include various locations and public sector contexts.
The researchers recommended that the government should set up a What Works Centre for AI in Public Services to generate and synthesise lessons from pilots and evaluations, and that a coalition of researchers, policymakers, civil society and community groups should collaborate on research into the systemic impacts of AI transcription tools.
The report also recommended that the UK government should create and disseminate context-specific evidence about the contribution of AI adoption to cost savings and productivity, and that local authorities should specify their intended outcomes and expected impact when procuring AI transcription tools, to ensure a shared understanding among staff and users.
Lara Groves, Senior Researcher at the Ada Lovelace Institute and co-author of the research, said: “AI may be able to help make aspects of public services more efficient, and so policymakers should absolutely be looking at how technology can help public sector workers. However, delivering time savings is not necessarily the same thing as delivering public benefit, especially if these come at the cost of inaccuracy or unaccountability. In the rush to adopt AI in the public sector, it is essential that policymakers don’t lose sight of the wider risks to people and society and the need for responsible governance.”
Oliver Bruff, Researcher at the Ada Lovelace Institute and co-author of the research, said: “The safe and effective use of AI technologies in public services requires more than small-scale or narrowly scoped pilots. Ensuring that AI in the public sector works for people and society requires taking a much deeper and more systematic approach to evaluating the broader impacts of AI, as well as working with frontline professionals and affected communities to develop stronger regulatory guidelines and safeguards.”