
Online safety regulator Ofcom has been accused of a “muddled and confused” response to regulating the dangers of AI chatbots, which campaigners say pose a “clear risk” to the public.
Andy Burrows, chief executive of the online safety and suicide prevention charity the Molly Rose Foundation, said too many AI chatbots were being rushed out by tech firms in a battle for market share in the new but rapidly growing space of generative AI (Gen AI).
Last week, the Wall Street Journal reported it had found that Meta’s AI chatbots and virtual personas would take part in romantic and even sexual role-plays with users, including children.
The report said Meta had called the testing manipulative and unrepresentative of how most users engage with chatbots, but had made changes to its products after seeing the findings.
Mr Burrows said this latest report should prompt Ofcom to regulate AI chatbots more tightly under the Online Safety Act, an area where he said the regulator had not been clear enough.
“Every week brings fresh evidence of the lack of basic safeguarding protections in AI-generated chatbots that are being hurriedly rushed out by tech companies in an all too familiar battle for market share,” he said.
“Despite this, Ofcom’s response to the risks remains muddled and confused.
“The regulator has repeatedly declined to state whether chatbots can even trigger the illegal safety duties set out in the Act.
“If there are loopholes in the Act, Ofcom should stop dodging the question and start providing clarity on how we need to plug them.
“From child sex abuse to inciting acts of violence and even suicide, poorly regulated chatbots are a clear risk to the safety of individuals and the public.”
Asked about the subject during an evidence session of the Science, Innovation and Technology Committee on Tuesday, Ofcom’s director for online safety strategy delivery, Mark Bunting, acknowledged that the “legal position” was “not entirely clear” and “complex”.
“The first thing to say is that Gen AI content that meets the definitions of illegal content, or content that is harmful to children, is treated in the Act exactly the same way as any other type of content,” he told MPs.
“The Act is deliberately drawn in a way that’s technology neutral.
“There are areas of the technology where we think the legal position is not entirely clear or it’s complex.
“So, for example, chatbots and the character services that we’ve seen linked with harm in the last few months, we think they are caught by the Act in some circumstances, but not necessarily all circumstances.
“The mere fact of chatting with a chatbot is probably not a form of interaction which is captured by the Act, so there will be things there that we’ll want to continue to monitor.
“We’ll want to talk to industry about those things where we think that there’s more that could be done – we’d be very happy to work with Government and parliament to try to build on the legislation that’s already in place.”
Online safety groups have raised a number of concerns about AI chatbots, including that they can quickly and easily spread misinformation because of flawed training data or AI hallucinations, while AI image generation tools have been used to create child sexual abuse material.
Earlier this month, the safety organisation the Internet Watch Foundation (IWF) reported finding record levels of web pages hosting child sexual abuse material in 2024, and warned that AI-generated content was a key factor in that rise.