Alex Hern, UK technology editor

Tech firms must ‘tame’ algorithms under Ofcom child safety rules

Seriously harmful content, such as material relating to suicide, eating disorders and pornography, will have to be kept off children’s feeds. Photograph: Dominic Lipinski/PA

Social media firms have been told to “tame aggressive algorithms” that recommend harmful content to children, as part of Ofcom’s new safety codes of practice.

The children’s safety codes, introduced under the Online Safety Act, allow Ofcom to set tight new rules for how internet companies interact with children. They call on services to make their platforms child-safe by default, or to implement robust age checks to identify children and give them safer versions of the experience.

For sites with age checks, Ofcom will require algorithmic curation to be adjusted to limit the risks to younger users. Sites such as Instagram and TikTok would have to ensure that suggested posts and “for you” pages explicitly take account of children’s ages.

They will also have to put extra effort into suppressing the spread of harmful content, such as “violent, hateful or abusive material, online bullying, and content promoting dangerous challenges”.

More seriously harmful content, including that relating to suicide, self-harm and eating disorders, will need to be kept off children’s feeds entirely, as will pornography.

Enforcing the new requirements will pose a challenge. Algorithmic curation is often described as a “black box”, with some companies unsure how their own systems decide which content to promote or suppress. But Ofcom is confident that its enforcement will be effective, said Gill Whitehead, the regulator’s online safety lead.

“We’ve spoken to 15,000 children in the last two years in the run-up to today, and they tell us the types of harmful content they’re seeing, how it appears, and how often they’re seeing it. And we also have very strong information-gathering powers, in order to request that data and require tech firms to provide that data to us.

“The big change is that the very harmful content that [children] are seeing needs to be filtered out so that they’re not seeing it. And then harmful content, like violent or harmful substances, or dangerous challenges or stunts needs to be down ranked, so they’re seeing it far less often. So those sort of powerful combinations of volume and intensity won’t be as prolific or damaging for children as it is today.”

The draft code is open for consultation until 17 July, before being finalised and presented to parliament. Services will then have three months to conduct their own children’s risk assessments, which must be completed before enforcement begins.

The Ofcom chief executive, Dame Melanie Dawes, said: “We want children to enjoy life online. But, for too long, their experiences have been blighted by seriously harmful content which they can’t avoid or control. Many parents share feelings of frustration and worry about how to keep their children safe. That must change.

“In line with new online safety laws, our proposed codes firmly place the responsibility for keeping children safer on tech firms. They will need to tame aggressive algorithms that push harmful content to children in their personalised feeds and introduce age checks so children get an experience that’s right for their age.

“Our measures, which go way beyond current industry standards, will deliver a step-change in online safety for children in the UK. Once they are in force we won’t hesitate to use our full range of enforcement powers to hold platforms to account. That’s a promise we make to children and parents today.”

The UK technology secretary, Michelle Donelan, said: “The government assigned Ofcom to deliver the act and today the regulator has been clear: platforms must introduce the kinds of age checks young people experience in the real world and address algorithms which too readily mean they come across harmful material online.

“Once in place, these measures will bring in a fundamental change in how children in the UK experience the online world.”

The child online safety campaigner Ian Russell, the father of 14-year-old Molly Russell, who took her own life in November 2017 after viewing harmful material on social media, said more still needed to be done to protect young people from online harm.

In his role as chair of the online safety charity the Molly Rose Foundation, Russell said: “Ofcom’s task was to seize the moment and propose bold and decisive measures that can protect children from widespread but inherently preventable harm.

“The regulator has proposed some important and welcome measures, but its overall set of proposals need to be more ambitious to prevent children encountering harmful content that cost Molly’s life.”
