The Guardian - UK
Technology
James Tapper

Warning over use in UK of unregulated AI chatbots to create social care plans

The use of AI presents a potential risk to patient confidentiality, according to academics. Photograph: Yuichiro Chino/Getty Images

Britain’s hard-pressed carers need all the help they can get. But that should not include using unregulated AI bots, according to researchers who say the AI revolution in social care needs a hard ethical edge.

A pilot study by academics at the University of Oxford found some care providers had been using generative AI chatbots such as ChatGPT and Bard to create care plans for people receiving care.

That presents a potential risk to patient confidentiality, according to Dr Caroline Green, an early career research fellow at the Institute for Ethics in AI at Oxford, who surveyed care organisations for the study.

“If you put any type of personal data into [a generative AI chatbot], that data is used to train the language model,” Green said. “That personal data could be generated and revealed to somebody else.”

She said carers might act on faulty or biased information and inadvertently cause harm, and an AI-generated care plan might be substandard.

But there were also potential benefits to AI, Green added. “It could help with this administrative heavy work and allow people to revisit care plans more often. At the moment, I wouldn’t encourage anyone to do that, but there are organisations working on creating apps and websites to do exactly that.”

Technology based on large language models is already being used by health and care bodies. PainChek is a phone app that uses AI-trained facial recognition to identify whether someone incapable of speaking is in pain by detecting tiny muscle twitches. Oxevision, a system used by half of NHS mental health trusts, uses infrared cameras fitted in seclusion rooms – for potentially violent patients with severe dementia or acute psychiatric needs – to monitor whether they are at risk of falling, the amount of sleep they are getting and other activity levels.

Projects at an earlier stage include Sentai, a care-monitoring system that uses Amazon’s Alexa speakers to remind people without 24-hour carers to take their medication and to let relatives check in on them remotely.

The Bristol Robotics Lab is developing a device for people with memory problems whose homes are fitted with detectors that shut off the gas supply if a hob is left on, according to George MacGinnis, challenge director for healthy ageing at Innovate UK.

“Historically, that would mean a call out from a gas engineer to make sure everything was safe,” MacGinnis said. “Bristol is developing a system with disability charities that would enable people to do that safely themselves.

“We’ve also funded a circadian lighting system that adapts to people and helps them regain their circadian rhythm, one of the things that gets lost in dementia.”

While people who work in creative industries are worried about being replaced by AI, social care faces the opposite pressure: there are about 1.6 million workers and 152,000 vacancies, with 5.7 million unpaid carers looking after relatives, friends or neighbours.

“People see AI in binary ways – either it replaces a worker or you carry on as we are now,” said Lionel Tarassenko, professor of engineering science and president of Reuben College, Oxford. “It’s not that at all – it’s taking people who have low levels of experience and upskilling them to be at the same level as someone with great expertise.

“I was involved in the care of my father, who died at 88, just four months ago. We had a live-in carer. When we went to take over at the weekend, my sister and I were effectively caring for somebody we loved deeply and knew well who had dementia, but we didn’t have the same level of skills as live-in carers. So these tools would have enabled us to get to a similar level as a trained, experienced carer.”

However, some care managers fear that using AI technology could inadvertently put them in breach of Care Quality Commission rules and cost them their registration, said Mark Topps, who works in social care and co-hosts The Caring View podcast.

“Until the regulator releases guidance, a lot of organisations won’t do anything because of the backlash if they get it wrong,” he said.

Last month, 30 social care organisations including the National Care Association, Skills for Care, Adass and Scottish Care met at Reuben College to discuss how to use generative AI responsibly. Green, who convened the meeting, said they intended to create a good practice guide within six months and hoped to work with the CQC and the Department for Health and Social Care.

“We want to have guidelines that are enforceable by the DHSC which define what responsible use of generative AI in social care actually means,” she said.
