The Guardian - US
Technology
Katie McQue

UK watchdog accuses Apple of failing to report sexual images of children

In 2023, Apple made just 267 reports of suspected child sexual abuse material on its platforms worldwide to the National Center for Missing & Exploited Children. Photograph: Stephen Lam/Reuters

Apple is failing to effectively monitor its platforms or scan for images and videos of the sexual abuse of children, child safety experts allege, raising concerns about how the company will handle the growing volume of such material associated with artificial intelligence.

The UK’s National Society for the Prevention of Cruelty to Children (NSPCC) accuses Apple of vastly undercounting how often child sexual abuse material (CSAM) appears in its products. In a single year, child predators used Apple’s iCloud, iMessage and FaceTime to store and exchange CSAM in more cases in England and Wales alone than the company reported across all other countries combined, according to police data obtained by the NSPCC.

Through data gathered via freedom of information requests and shared exclusively with the Guardian, the children’s charity found Apple was implicated in 337 recorded offenses of child abuse images between April 2022 and March 2023 in England and Wales. In 2023, Apple made just 267 reports of suspected CSAM on its platforms worldwide to the National Center for Missing & Exploited Children (NCMEC), which is in stark contrast to its big tech peers, with Google reporting more than 1.47m and Meta reporting more than 30.6m, per NCMEC’s annual report.

All US-based tech companies are obligated to report all cases of CSAM they detect on their platforms to NCMEC. The Virginia-headquartered organization acts as a clearinghouse for reports of child abuse from around the world, viewing them and sending them to the relevant law enforcement agencies. iMessage is an encrypted messaging service, meaning Apple is unable to see the contents of users’ messages, but so is Meta’s WhatsApp, which made roughly 1.4m reports of suspected CSAM to NCMEC in 2023.

“There is a concerning discrepancy between the number of UK child abuse image crimes taking place on Apple’s services and the almost negligible number of global reports of abuse content they make to authorities,” said Richard Collard, head of child safety online policy at the NSPCC. “Apple is clearly behind many of their peers in tackling child sexual abuse when all tech firms should be investing in safety and preparing for the roll out of the Online Safety Act in the UK.”

Apple declined to provide a comment for this article. The company instead directed the Guardian to statements it made last August, in which it said it had decided not to proceed with a program scanning iCloud photos for CSAM because it instead chose a path that “prioritizes the security and privacy of [its] users”.

In late 2022, Apple abandoned plans to roll out the iCloud photo-scanning tool. Apple’s tool, called neuralMatch, would have scanned images before they were uploaded to iCloud, the company’s online photo storage, comparing them against a database of known child abuse imagery via mathematical fingerprints known as hash values.
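For readers unfamiliar with hash matching, the sketch below illustrates the general idea only: an image’s fingerprint is computed and checked against a list of known values before upload. All names and values here are hypothetical, and a cryptographic hash is used purely to keep the example self-contained; Apple’s abandoned system relied on a perceptual hash designed to survive resizing and re-encoding.

```python
# Illustrative sketch only, not Apple's actual implementation: the abandoned
# neuralMatch tool used a perceptual hash that tolerates resizing and
# re-encoding, whereas a cryptographic hash is used here for simplicity.
import hashlib

# Hypothetical database of fingerprints for known abuse imagery, of the kind
# compiled by clearinghouses such as NCMEC (placeholder value below).
KNOWN_HASHES = {
    "0000000000000000000000000000000000000000000000000000000000000000",
}

def fingerprint(image_bytes: bytes) -> str:
    """Compute a fingerprint (hash value) from an image's raw bytes."""
    return hashlib.sha256(image_bytes).hexdigest()

def should_flag(image_bytes: bytes) -> bool:
    """Return True if the image's fingerprint matches a known hash."""
    return fingerprint(image_bytes) in KNOWN_HASHES
```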

However, the software was subject to pushback by digital rights groups, who voiced concerns that it would inevitably be used to compromise the privacy and security of all iCloud users. Child safety advocates decried the rollback of the feature.

“Apple does not detect CSAM in the majority of its environments at scale, at all,” said Sarah Gardner, chief executive officer of Heat Initiative, a Los Angeles non-profit focused on child protection. “They are clearly underreporting and have not invested in trust and safety teams to be able to handle this.”

Apple’s June announcement that it will launch an artificial intelligence system, Apple Intelligence, has been met with alarm by child safety experts.

“The race to roll out Apple AI is worrying when AI-generated child abuse material is putting children at risk and impacting the ability of police to safeguard young victims, especially as Apple pushed back embedding technology to protect children,” said Collard. Apple says the AI system, which was created in partnership with OpenAI, will customize user experiences, automate tasks and increase privacy for users.

In 2023, NCMEC received more than 4,700 reports of AI-generated CSAM, and has said it expects the number of such reports to increase in the future. Since AI models able to create CSAM have been trained on “real life” images of child abuse, AI-generated images are also implicated in the victimization of children. The Guardian reported in June that child predators are using AI to create new images of their favorite victims, further exacerbating the trauma of survivors of child abuse imagery.

“The company is moving ahead to a territory that we know could be incredibly detrimental and dangerous to children without the track record of being able to handle it,” said Gardner. “Apple is a black hole.”
