The Guardian - AU
Technology
Josh Taylor

AMA calls for stronger AI regulations after doctors use ChatGPT to write medical notes

Five hospitals in Perth’s South Metropolitan Health Service were advised in May to stop using ChatGPT to write medical records for patients. Photograph: Moyo Studio/Getty Images

Australia’s peak medical association has called for strong rules and transparency around the use of artificial intelligence in the healthcare industry, after a warning to doctors in Perth-based hospitals not to write medical notes using ChatGPT.

The Australian Medical Association said in its submission to the federal government’s discussion paper on safe and responsible AI, seen by the Guardian, that Australia lags behind comparable countries in regulating AI, and that stronger rules are needed to protect patients and healthcare professionals and to engender trust.

Five hospitals in Perth’s South Metropolitan Health Service were advised in May to stop using ChatGPT to write medical records for patients after it emerged that some staff had been using the large language model to draft the notes.

The ABC reported that, in an email to staff, the service’s CEO, Paul Forden, said there was no assurance of patient confidentiality when using such systems, and that the practice must stop.

The submission said AI protections should include ensuring that clinicians make the final decisions, and that patients give informed consent to any treatment or diagnostic procedure undertaken using AI.

The AMA also said patient data must be protected and appropriate ethical oversight is needed to ensure the system does not lead to greater health inequalities.

The proposed EU Artificial Intelligence Act – which would categorise the risks of different AI technologies and establish an oversight board – should be considered for Australia, the AMA said. Canada’s requirement for human intervention points for decision-making should also be considered, it said.

“Future regulation should ensure that clinical decisions that are influenced by AI are made with specified human intervention points during the decision-making process,” the submission states.

“The final decision must always be made by a human, and this decision must be a meaningful decision, not merely a tick box exercise.”

“The regulation should make clear that the ultimate decision on patient care should always be made by a human, usually a medical practitioner.”

The AMA president, Prof Steve Robson, said AI is a rapidly evolving field and that understanding of it varies from person to person.

“We need to address the AI regulation gap in Australia, but especially in healthcare where there is the potential for patient injury from system errors, systemic bias embedded in algorithms and increased risk to patient privacy,” he said in a statement.

Google’s chief health officer, Dr Karen DeSalvo, told Guardian Australia earlier this month that AI will ultimately improve health outcomes for patients, but stressed the importance of getting the systems right.

“We have a lot of things to work out to make sure the models are constrained appropriately, that they’re factual, consistent, and that they follow these ethical and equity approaches that we want to take – but I’m super excited about the potential.”

A Google research study published in Nature this month found Google’s own medical large language model generated answers on a par with those of clinicians 92.9% of the time when asked the most common medical questions searched online.
