Google has quietly been testing an AI chatbot that can answer medical questions at a major hospital and research centre in the US.
The Mayo Clinic, a sprawling healthcare provider that treats 1.3 million patients per year, started using Med-PaLM 2 in April, the Wall Street Journal reported.
The partnership comes as the tech industry races to create generative AI that can serve the needs of overburdened doctors and frustrated patients.
Whereas general chatbots such as ChatGPT can hold conversations on virtually any topic, Google has customised its bot for the healthcare industry by training it on questions and answers from medical licensing exams.
Both Google and its rival Microsoft are reportedly working hard on virtual assistants that can address the needs of patients worldwide.
The Mayo Clinic trial could pave the way for such a tool. The hospital is already using a Google-powered search system to scour massive repositories of patient data. Google opened an office in 2021 near the Mayo Clinic’s mega-site in Minnesota to work on the project.
The hospital is a not-for-profit that invests the roughly £13 billion in revenue it makes annually into cutting-edge research projects. During its 159-year history, the clinic has made numerous medical breakthroughs. These include real-time frozen-section tumour analysis; the discovery of cortisone, which is used to treat inflammatory conditions such as autoimmune diseases as well as joint swelling and pain; and the first series of successful surgeries using a heart-lung bypass machine.
In late 2019, the Mayo Clinic opened its London arm in Marylebone in collaboration with the University of Oxford.
Despite the high-profile partnership, Google is playing catch-up with its chief competitor Microsoft. US hospitals are already using OpenAI’s ChatGPT algorithms through investor Microsoft’s cloud service. One of the tools created using the tech can summarise doctors’ notes within minutes.
In the battle to conquer health care, the two rivals have also faced their fair share of mishaps. Concerns were raised in 2019 that Google was gathering detailed health record data from millions of Americans without informing patients or doctors.
Although physicians who reviewed Med-PaLM 2 said they mainly preferred its answers to those produced by doctors, they also found that it included more inaccurate content in its responses compared with healthcare professionals.
This implies that the chatbot is prone to spouting misinformation, much like other generative AI tools.
Similarly, a recent report revealed that Microsoft’s BioGPT AI — which the company claimed had achieved “human parity” — spouted wildly inaccurate content. This included claims that vaccines might cause autism and that hospitals were haunted by ghosts.