Alfred Health director of physician education and rheumatologist Anne Powell said 50% of doctors in hospitals are burnt out.
“If you could have an AI go through and do all your discharge summaries, that would be great,” Powell told Crikey.
There may be roles for AI in a “paperwork-heavy” and “bureaucracy-rich” Australian hospital setting, but how does it fare when sitting a medical examination?
In research published today, ChatGPT was put to the test in the US, made to sit all three steps of the June 2022 US Medical Licensing Examination (USMLE): 350 questions spanning biochemistry, diagnostic reasoning and bioethics. The study (which ChatGPT also had a hand in writing) found the AI teetered on the 60% pass mark, scoring between 52.4% and 75%. Its responses were deemed “coherent”, insightful (88.9% of answers offered something “new, non-obvious, and clinically valid”), and said to make “internal sense”.
It even outperformed PubMedGPT, a rival chatbot specialising in biomedical literature, by 0.2%.
So what are its chances of becoming a doctor in Australia? Short answer: low.
Powell was firm that any medical practitioner (at least on her watch) must come with a body, but said she would not be surprised (or impressed) if AI were able to pass an Australian written medical examination.
“People often think you need to be smart to be a doctor, but you don’t have to be that smart. There’s just a sheer amount of knowledge you have to sift through,” she said.
“AI has access to all the books. Most medical students could pass an exam if it was all open book.”
ChatGPT differs from other AI chatbots in that it cannot search the internet, relying instead on a predictive word model drawn from an internal repository of knowledge. It’s these qualities that US researchers believe will make it a strong educational tool.
They have high hopes that ChatGPT can “assist human learners in a medical education setting” and even higher hopes that this will be a “prelude to future integration into clinical decision-making”.
Philip Pigou is CEO of the Australian Medical Council, the body responsible for accrediting medical training in the country. He told Crikey that AI (as a student or teacher) does not feature in the current curriculum, but the organisation is watching its rise.
“We’re interested observers more than we are setting the exact framework for AI, but AI is clearly coming in and affecting things. It’s a matter of trying to understand what that effect is,” he said.
From inside the hospital, Powell does not see AI as an up-and-coming teacher (given that the vast majority of education delivered in hospitals is “hands-on practical learning”), but says it has multiple medical roles to play.
Be it churning through paperwork, triaging patients, identifying disease based on pattern recognition (particularly relevant to pathology and dermatology), facilitating remote surgery, or simply upping the ante on Google Doctor, Powell says AI will be a “helpful adjunct and side tool”, less so a driver.
“It’s more a repository of knowledge, a mechanism to get information quickly. I can ask it for the weather in French,” Powell said, something notoriously difficult to diagnose.
Translating medical jargon into common vernacular, and then again into another language, is a hoop hospitals must constantly jump through. Currently, the former is a task for medical practitioners themselves, while the latter falls to interpreters.
“I’d love to use the translation function to give patients information quickly. For example, ‘give an explanation of the cause of rheumatoid arthritis in Arabic’. But at the moment I have no idea what it’s saying so we can’t use it in the hospital,” Powell said.