The Guardian - UK
Politics
Tobi Thomas, health and inequalities correspondent

One in five GPs use AI such as ChatGPT for daily tasks, survey finds

The use of AI by doctors could undermine patient confidentiality, the researchers said. Photograph: Stephen Barnes/Medical/Alamy

A fifth of GPs are using artificial intelligence (AI) tools such as ChatGPT to help with tasks including writing letters for their patients after appointments, according to a survey.

The survey, published in the journal BMJ Health and Care Informatics, polled 1,006 GPs. They were asked whether they had ever used any form of AI chatbot, such as ChatGPT, Bing AI or Google’s Gemini, in their clinical practice, and what they used these tools for.

One in five of the respondents said that they had used generative AI tools in their clinical practice and, of these, almost a third (29%) said that they had used them to generate documentation after patient appointments, while 28% said that they had used the tools to suggest a different diagnosis.

A quarter of respondents said they had used the AI tools to suggest treatment options for their patients. These AI tools, such as ChatGPT, work by generating a written answer to a question posed to the software.

The researchers said that the findings showed that “GPs may derive value from these tools, particularly with administrative tasks and to support clinical reasoning”.

However, the researchers also questioned whether the use of these AI tools could risk undermining patient privacy, “since it is not clear how the internet companies behind generative AI use the information they gather”.

They added: “While these chatbots are increasingly the target of regulatory efforts, it remains unclear how the legislation will intersect in a practical way with these tools in clinical practice.”

Dr Ellie Mein, medico-legal adviser at the Medical Defence Union, said that the use of AI by GPs could raise issues including inaccuracy and patient confidentiality.

“This is an interesting piece of research and is consistent with our own experience of advising MDU members,” Mein said. “It’s only natural for healthcare professionals to want to find ways to work smarter with the pressures they face. Along with the uses identified in the BMJ paper, we’ve found that some doctors are turning to AI programs to help draft complaint responses for them. We have cautioned MDU members about the issues this raises, including inaccuracy and patient confidentiality. There are also data protection considerations.”

She added: “When dealing with patient complaints, AI drafted responses may sound plausible but can contain inaccuracies and reference incorrect guidelines which can be hard to spot when woven into very eloquent passages of text. It’s vital that doctors use AI in an ethical way and comply with relevant guidance and regulations. Clearly this is an evolving area and we agree with the authors that current and future doctors need greater awareness of the benefits and risks of using AI at work.”
