The Guardian - UK
Alex Hern Technology editor

Over 90% of child sexual abuse imagery is self-generated, data shows

The Internet Watch Foundation said 10 years ago self-generated content was not something it found at all in its monitoring of child sexual abuse online. Photograph: Peter Byrne/PA

More than 90% of child sexual abuse imagery found on the internet is now self-generated, according to the charity responsible for finding and removing such material.

The Internet Watch Foundation said it discovered self-generated child sexual abuse material (CSAM) featuring children under 10 on more than 100,000 webpages in the past year, a 66% increase on the year before.

In total, a record 275,655 webpages were confirmed to contain CSAM, the IWF said, an increase of 8%. The new data prompted a renewed attack on end-to-end encryption from the UK government, backed by the IWF.

The rise in imagery discovered and removed is not necessarily problematic, said the charity’s chief executive, Susie Hargreaves, as some of the increase could be accounted for by better detection.

“It does mean we’re detecting more, but I don’t think it’s ever a good thing if you’re finding loads more child sexual abuse,” Hargreaves added. “Obviously the IWF would be most successful if we didn’t find any images of child sexual abuse. Our mission is the elimination of child sexual abuse – it’s not just to find as much as possible and take it down.”

Some of the self-generated imagery was created by children as young as three years old, the IWF said, and a fifth was classified as “category A”, the most severe type of sexual abuse.

“Ten years ago we hadn’t seen self-generated content at all, and a decade later we’re now finding that 92% of the webpages we remove have got self-generated content on them,” Hargreaves said. “That’s children in their bedrooms and domestic settings where they’ve been tricked, coerced or encouraged into engaging in sexual activity which is then recorded and shared by child sexual abuse websites.”

The charity said the new figures, the first it has compiled for 2023, underscore its opposition to Meta’s plans to turn on end-to-end encryption for Messenger, a security feature that would blind the company to content being shared on its service. The company reported 20 million incidents of people sharing CSAM in 2022 to the IWF’s US equivalent, the National Center for Missing & Exploited Children (NCMEC), and the IWF fears that almost all of those reports would be lost. Hargreaves also criticised Apple for dropping plans to scan for CSAM on iPhones in a way the company had initially argued was privacy-preserving.

“With so many organisations looking to do the right thing in the light of new regulations in the UK, it is incomprehensible that Meta is deciding to look the other way and offer criminals a free pass to further share and spread abuse imagery in private and undetected,” she said.

“Decisions like this, as well as Apple opting to drop plans for client-side scanning to detect the sharing of abuse, are baffling given the context of the spread of this imagery on the wider web.”

Tom Tugendhat, the UK security minister, said: “This alarming report clearly shows that online child sexual abuse is on the rise, and the victims are only getting younger. And yet, despite warnings from across government, charities, law enforcement and our international partners, Meta have taken the extraordinary decision to turn their backs on these victims, and provide a ‘safe space’ for heinous predators.

“The decision to roll out end-to-end encryption on Facebook Messenger without the necessary safety features will have a catastrophic impact on law enforcement’s ability to bring perpetrators to justice.”

In a statement, a Meta spokesperson said it expected to continue providing more reports to NCMEC than others. “Encryption helps keep people, including children, safe from hackers, scammers and criminals. We don’t think people want us reading their private messages, so have spent years developing robust safety measures to prevent, detect and combat child abuse while maintaining online security.

“Our recently published report detailed these measures, such as restricting over-19s from messaging teens who don’t follow them and using technology to identify and take action against malicious behaviour. We routinely provide more reports to NCMEC than others, and given our ongoing investments, we expect that to continue.”

Apple did not reply to a request for comment. The company “delayed” its plans for so-called client-side scanning of iPhones a month after announcing them, and has never publicly acknowledged that they have been dropped for good.
