
One in five UK doctors use AI chatbots

A survey led by researchers at Uppsala University in Sweden reveals that a significant proportion of UK general practitioners (GPs) are integrating generative AI tools, such as ChatGPT, into their clinical workflows. The results highlight the rapidly growing role of AI in healthcare—a development that has the potential to revolutionise patient care but also raises significant ethical and safety concerns.


‘While there is much talk about the hype of AI, our study suggests that the use of AI in healthcare is not just on the horizon—it’s happening now. Doctors are deriving value from these tools. The medical community must act swiftly to address the ethical and practical challenges for patients that generative AI brings,’ says lead researcher Dr Charlotte Blease, Associate Professor at Uppsala University.


The study reveals that 20 per cent of GPs reported using generative AI tools in their practice, with ChatGPT the most frequently used. Conducted with collaborators at Harvard Medical School in Boston, USA, and the University of Basel in Switzerland, it is the most comprehensive examination of generative AI in clinical practice since the launch of ChatGPT in November 2022.


The survey was carried out in February 2024 among 1,006 GPs registered with Doctors.net.uk, the largest professional network for UK doctors.


Among the GPs who reported using generative AI, 29 per cent used these tools to generate documentation after patient appointments, while 28 per cent used them to assist with differential diagnosis. These findings suggest that AI chatbots are becoming valuable assets in medical practice, particularly in reducing administrative burdens and supporting clinical decision-making.

However, the use of these tools is not without risks. The potential for AI to introduce errors (“hallucinations”), exacerbate biases, and compromise patient privacy is significant. As these tools continue to evolve, there is an urgent need for the healthcare industry to establish robust guidelines and training programmes to ensure their safe and effective use.


‘This study underscores the growing reliance on AI tools by UK GPs, despite the lack of formal training and guidance and the potential risks involved. As the healthcare sector and regulatory authorities continue to grapple with these challenges, the need to train doctors to be 21st century physicians is more pressing than ever,’ Blease concludes.


This study was supported by the Research Council for Health, Working Life and Welfare (project Beyond Implementation of eHealth, 2020-0122) and by the University of Basel.
