What are the benefits and risks of AI in healthcare?

Professor Christina Pagel, director of UCL’s Clinical Operational Research Unit and a member of The Operational Research Society, explores how AI can boost efficiency, accuracy and patient care, while warning of risks around bias, over-diagnosis and ethics.

Artificial intelligence (AI) is transforming industries worldwide. According to PwC’s 2025 AI Predictions report, top-performing organisations are shifting their focus from simply experimenting with AI use cases to embedding AI at the core of their business strategy. Healthcare is one example of where AI adoption is accelerating.

A survey of UK healthcare organisations by SOTI showed that in just one year the number of UK healthcare organisations using AI jumped from 47 per cent in 2024 to 94 per cent in 2025. AI use was once limited to administrative functions, but over half of organisations now use it to help diagnose conditions or personalise treatment, though the most common use remains processing or analysing medical data.

AI in action 
By 2027, the rollout of validated AI diagnostic tools and administrative aids, including AI scribes, across GP practices is set to save time, freeing up the equivalent of more than 2,000 full-time GP positions. Meanwhile, NHS England is investing £6 million in an AI research screening platform to help hospitals trial tools that analyse images and detect abnormalities. This is a taster of how AI could reshape future diagnosis and treatment.

The UK government is fully invested in AI use across the NHS. Supporting its 10-Year Health Plan for England, a National Commission has just been established to make the NHS “the most AI-enabled care system in the world,” accelerating safe access to AI and shaping a regulatory framework.

Yet as with any innovative technology, there are risks as well as benefits, and implementation will need to be planned carefully to ensure it is successful and lives up to expectations.

The benefits of AI 
Understanding what is meant by AI matters. A few years ago, the term mainly referred to machine learning (ML) and neural networks, systems trained to recognise patterns, particularly in images. Today, AI usually refers to large language models (LLMs) such as ChatGPT. The distinction matters. ML excels in pattern recognition, classification, and prediction tasks, while LLMs manage language, synthesis, and reasoning.

In my view, medical imaging offers the clearest success case so far. ML systems can analyse scans and flag anomalies with remarkable accuracy, making radiology and pathology ideal areas for AI support. These tools assist rather than replace human experts. However, LLMs are now emerging as practical tools to support clinicians in real-world decision-making.

These can review, summarise, and synthesise patient data, identify subtle patterns that might be missed (particularly from text data), and offer alternative explanations, essentially acting as another pair of eyes. Rather than replacing clinical judgment, they complement it by reducing common sources of error, such as cognitive bias or communication gaps within teams. By providing input without regard to hierarchy or status, LLMs have the potential to quietly strengthen the way care decisions are made. The key is that LLMs should not be used to replace clinical judgement and expertise, but to enhance it.

AI also promises to relieve administrative burdens. Tools that transcribe consultations, or draft letters, can save clinicians or support staff hours every week. In an overstretched NHS, time is precious, but efficiency must not compromise empathy, the human connection that data alone cannot capture.


Navigating the risks of AI
One of the biggest risks of ML systems analysing imaging data is that they can also detect harmless irregularities, which can contribute to over-diagnosis. Given that over-diagnosis is already a concern for some conditions, such as thyroid or prostate cancers, the use case for ML must be considered on a disease-by-disease basis.

Detecting more does not always mean treating better, and as diagnostic AI expands, the NHS must avoid over-testing and overtreatment. The seductive trap is that the more healthy people are treated when they do not need to be, the better a programme's outcomes will look, because the statistics are diluted with people who were always going to be fine.
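This dilution effect is easy to see with a toy calculation. The numbers below are entirely hypothetical, chosen only to illustrate the arithmetic:

```python
# Hypothetical illustration: how over-diagnosis can flatter a
# screening programme's headline survival figures.

# Baseline: 100 people with genuinely dangerous disease; 60 survive.
true_cases, true_survivors = 100, 60
baseline_survival = true_survivors / true_cases

# Over-diagnosis adds 100 harmless cases. All were always going to
# be fine, so all 100 "survive" after treatment.
overdiagnosed = 100
diluted_survival = (true_survivors + overdiagnosed) / (true_cases + overdiagnosed)

print(f"Baseline survival: {baseline_survival:.0%}")  # 60%
print(f"Diluted survival:  {diluted_survival:.0%}")   # 80%
# Survival appears to jump from 60% to 80%, yet not a single
# genuinely ill patient has done any better.
```

The headline figure improves purely because healthy people were added to the denominator, which is exactly why expanding detection needs to be evaluated against patient outcomes, not apparent survival rates.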

AI is only as good as the data it relies on; incomplete or biased datasets risk perpetuating inequities. Historic underrepresentation of women, ethnic minorities, or older adults can skew outcomes, though AI does hold potential to correct these gaps too, for example, improving recognition of skin conditions on darker skin or ensuring women’s symptoms are accurately assessed.

Then there is the question of ethics and consent. For example, a small pilot research study is currently being conducted at University College London Hospital (UCLH). Researchers led by an ICU doctor are using camera-based AI to monitor sedated patients (with their prior consent) for signs of pain or delirium. Early results suggest it can improve comfort and shorten hospital stays, but it raises concerns around surveillance and how consent could be sought routinely if the technology is rolled out more broadly.

Finally, data alone cannot capture (and will never capture) the full complexity of healthcare. Lifestyle factors, patient-reported symptoms, and in-person observations are often unrecorded but vital to effective care. AI can process information faster than any clinician, but human expertise remains essential to interpret results and apply context.

Building an AI-ready future
While AI is proving effective in areas such as imaging, hospitals are far from fully AI-enabled. Most clinicians cannot use advanced LLMs due to legal, ethical, and infrastructure limitations. Real progress will require secure, ring-fenced models within hospitals, trained on local datasets under strict governance and with upgraded hardware and software. Just as much of an issue is that most UK healthcare settings are understaffed and clinical teams simply struggle to find the time or energy to learn new systems. 

Still, AI offers immense promise to improve efficiency, accuracy, and patient care. It can streamline administrative work, enhance diagnosis, reduce bias, and support clinical decision-making. Yet its success depends on the people who design, deploy, and interpret it. AI will not replace health professionals, but when implemented responsibly, it can help them work smarter, placing patients at the centre of healthcare.