The Guardian - AU
Josh Taylor

Australia releases new online safety standards to tackle terror and child sexual abuse content

The proposed standards require the operators of cloud or messaging services to detect and remove known child abuse material and pro-terror material ‘where technically feasible’. Photograph: Jirapatch Iamkate/Alamy

Australia’s online safety regulator appears to be heading off a potential battle with Apple over its encrypted messaging app iMessage, with the release of new standards that the regulator says will tackle terrorist content and child abuse material but won’t compromise end-to-end encryption.

In June, the eSafety commissioner, Julie Inman Grant, rejected two industry-designed regulatory codes because they didn’t require cloud storage services, email or encrypted messaging services to detect child abuse material. Instead, the regulator began working on mandatory standards, which were released in draft form on Monday.

The proposed standards require the operators of cloud or messaging services to detect and remove known child abuse material and pro-terror material “where technically feasible”, as well as disrupt and deter new material of the same nature.

In stipulating that detection is required only where technically feasible, eSafety has stressed that it “does not advocate building in weaknesses or back doors to undermine privacy and security on end-to-end encrypted services”.

“eSafety is not requiring companies to break end-to-end encryption through these standards nor do we expect companies to design systematic vulnerabilities or weaknesses into any of their end-to-end encrypted services,” Inman Grant said.

“But operating an end-to-end encrypted service does not absolve companies of responsibility and cannot serve as a free pass to do nothing about these criminal acts.”

By proceeding on the basis of what is “technically feasible”, the regulator may avoid a fight similar to the one Apple waged against the UK government earlier this year.

The tech company, along with other encrypted communications providers, threatened to pull iMessage from the UK if message-scanning requirements were introduced under local online safety laws. The UK government ultimately capitulated in September, shelving the plans unless scanning content becomes “technically feasible”.

The commissioner argued that technology such as hashing – which assigns known material a unique value that can be stored in a database and matched against new uploads – is technically feasible.

Inman Grant pointed to Meta – the parent company of Facebook, Instagram and WhatsApp – which uses hashing technology on its platforms to detect known material. The company made 27m reports of child sexual exploitation and abuse in 2022 to the National Center for Missing and Exploited Children, while Apple made just 234.
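
Hash-matching of this kind is not spelled out in the draft standards, but the general technique is well established: a service computes a digest of an uploaded file and checks it against a database of digests of previously verified material. A minimal sketch in Python, using an ordinary SHA-256 digest for simplicity (deployed systems such as Microsoft’s PhotoDNA or Meta’s PDQ use perceptual hashes, which also match slightly altered copies):

    import hashlib

    # Digests of previously verified material (illustrative placeholder
    # values; real databases hold millions of entries).
    KNOWN_MATERIAL_HASHES = {
        "placeholder-digest-1",
        "placeholder-digest-2",
    }

    def is_known_material(file_bytes: bytes) -> bool:
        """Return True if the file's digest appears in the known-material database."""
        digest = hashlib.sha256(file_bytes).hexdigest()
        return digest in KNOWN_MATERIAL_HASHES

An exact cryptographic hash such as SHA-256 only matches byte-identical files, which is why production systems favour perceptual hashing; the database lookup itself works the same way in either case.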

Where detection is deemed not to be technically feasible, eSafety has said the standard would require other measures, including clear and identifiable user reporting mechanisms and the detection of patterns in user behaviour – without reviewing the content of encrypted communications.

The draft standards are open for consultation until 21 December and are due to come into force by April next year.

Samantha Floreani, program lead at Digital Rights Watch, said the group remained concerned about the methods eSafety refers to in the draft standards.

“Such approaches have been widely criticised by privacy and security researchers for their questionable effectiveness, risk of false positives, increased vulnerabilities to security threats, and the ability to expand the use of such systems to police other categories of content,” she said.

“In our view, the implementation of such standards would compromise users’ digital security.”

Guardian Australia has sought comment from Apple.

The draft standards also contain clauses aimed at companies using generative artificial intelligence, intended to prevent AI from generating child sexual exploitation or pro-terror material.

The standards would require companies to use lists, hashes or other technology to detect and prevent AI from generating such content, and to warn users entering terms associated with child sexual abuse material of the risks and criminality involved.
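
The standards do not prescribe how such screening should work, but one common approach is to check prompts against a maintained blocklist of flagged terms before generation and surface a warning. A purely hypothetical sketch in Python – the terms, warning text and function name are illustrative, not drawn from the standards:

    # Hypothetical prompt screen; the blocklist would be maintained and
    # supplied by the service operator, not hard-coded like this.
    FLAGGED_TERMS = {"example flagged term", "another flagged term"}

    WARNING = (
        "Your request contains terms associated with illegal material. "
        "Creating or seeking such material is a criminal offence."
    )

    def screen_prompt(prompt: str) -> str | None:
        """Return a warning if the prompt contains a flagged term, else None."""
        lowered = prompt.lower()
        if any(term in lowered for term in FLAGGED_TERMS):
            return WARNING
        return None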
