As the UK pressures tech companies to open up end-to-end encrypted communications in the name of child safety, Australia is looking at following suit.
Law enforcement and child safety groups around the world say the rise in end-to-end encrypted communications apps makes it harder to track and stop the spread of online child abuse, with chat logs and filesharing going dark when communications are encrypted.
The Australian Criminal Intelligence Commission said in a parliamentary submission last year that Meta’s plans to make Facebook Messenger and Instagram messaging end-to-end encrypted raised concerns that the existing technologies the platforms use to detect child abuse material would no longer work.
“[The National Center for Missing & Exploited Children] NCMEC has estimated that Meta’s implementation of end-to-end encryption across all its major platforms will reduce the number of [child sexual abuse material] CSAM reports it receives by more than 50%.”
The commission expressed frustration that the debate had been framed as protecting children versus protecting privacy.
“The adoption of end-to-end encryption … will likely provide a haven for [child sexual abuse material] offending, rather than preventing it.”
But tech companies have been pushing back against what they say are calls to weaken end-to-end encrypted services. Both WhatsApp – owned by Meta – and Signal sounded the alarm in the UK in April this year about the online safety bill, which they said would empower the industry regulator “to try to force the proactive scanning of private messages on end-to-end encrypted communication services, nullifying the purpose of end-to-end encryption as a result and compromising the privacy of all users”.
The companies argue that measures introduced in the name of protecting children would ultimately be used by authoritarian states to weaken encrypted messaging and crack down on dissent.
In Australia, the push is coming on several fronts. The eSafety commissioner is forging ahead with powers similar to those of the UK communications regulator, Ofcom. In June, Julie Inman Grant rejected two industry-designed regulatory codes because they didn’t require cloud storage services, email or encrypted messaging services to detect child abuse material.
“For eSafety, these and other basic requirements are non-negotiable, and while we don’t take this decision lightly, we feel that moving to industry standards is the right one to protect the Australian community,” the commissioner said at the time.
Empowered by Australia’s own online safety legislation, the commissioner’s office is now developing mandatory codes that tech companies must comply with or face fines of up to $700,000 a day for breaches.
Rys Farthing, policy director at Reset Australia, says governments around the world are grappling with how to balance protecting children with respecting people’s right to privacy. But she says Australia’s approach of regulating through codes initially designed by industry leaves much to be desired.
“In Australia, the debate seems to have emerged through our online safety codes, which were instead drafted first by industry representatives, and I doubt the public really heard much about,” she says.
“The whole process around the drafting of our online safety codes was deeply problematic, and our approach of letting industry have first go when it comes to critical policy issues is just so sloppy.”
As those codes are developed, the tech platforms are also facing pressure from politicians to either refrain from offering end-to-end encrypted communications or to make them accessible to law enforcement.
In parliamentary hearings on law enforcement’s response to child exploitation, Google, Meta and Twitter (now known as X under Elon Musk’s ownership) all faced questions about their encryption plans.
Meta’s head of public policy in Australia and New Zealand, Josh Machin, says the company has been working very carefully to develop its encryption plans without compromising safety.
“The work that we’re doing to detect behavioural signals on our services is really effective here,” he said in February. “A user who is part of a sophisticated criminal enterprise has different behaviour on encrypted services than an ordinary user.
“Even if we are not able to see the content, there’s some pretty effective work we’ve been able to do to analyse the metadata based on the behaviour of the individuals involved.”
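Meta has not spelled out what those behavioural signals are. As a purely hypothetical illustration of the idea – scoring an account from metadata alone, without reading any message content – a rule-based sketch might look something like the following, where the field names, weights and threshold are all invented for the example:

```python
# Hypothetical illustration: scoring accounts from metadata alone (no message
# content), e.g. a new account rapidly initiating contact with many minors.
from dataclasses import dataclass


@dataclass
class AccountMetadata:
    account_age_days: int
    new_contacts_last_day: int        # first-time conversations initiated
    share_of_minor_recipients: float  # 0.0 - 1.0, from recipients' stated ages
    blocked_or_reported_count: int


def risk_score(m: AccountMetadata) -> float:
    """Toy heuristic combining behavioural signals into a single score."""
    score = 0.0
    if m.account_age_days < 30:
        score += 1.0
    if m.new_contacts_last_day > 20:
        score += 2.0
    score += 3.0 * m.share_of_minor_recipients
    score += 1.5 * m.blocked_or_reported_count
    return score


def needs_review(m: AccountMetadata, threshold: float = 4.0) -> bool:
    """Flag accounts scoring above a threshold for closer (human) review."""
    return risk_score(m) >= threshold
```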
Google’s manager of child safety, Emily Cashman Kirstein, told the committee earlier this month that Google too looks for “signals around the message” to detect such activity.
In Google’s storage systems, the company uses hash matching, in which a digital fingerprint of a file is compared against sets of known CSAM. Artificial intelligence is then used to detect and flag content hosted on its systems that might match known material, and a confirmation process including human review takes place before further action, she said.
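As a minimal sketch of the basic idea, the code below compares file hashes against a hypothetical list of known digests and flags matches for review. It uses an ordinary cryptographic hash (SHA-256) for simplicity; production systems typically rely on perceptual hashing, such as Microsoft’s PhotoDNA for images or Google’s CSAI Match for video, so that re-encoded or slightly altered copies still match, and the hash lists themselves come from organisations such as NCMEC.

```python
# Illustrative sketch only: flag files whose hashes appear in a set of known
# digests. The hash list and file paths are hypothetical; real systems use
# perceptual hashes so near-duplicates match, not just byte-identical files.
import hashlib
from pathlib import Path

# Hypothetical hex digests of known material, as supplied by a clearing house.
KNOWN_HASHES: set[str] = {
    "placeholder-digest-0000000000000000000000000000000000000000000000",
}


def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in 8 KB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def flag_for_review(paths: list[Path]) -> list[Path]:
    """Return the files whose digests match the known set, for human review."""
    return [p for p in paths if sha256_of(p) in KNOWN_HASHES]
```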
“So, fundamentally, end-to-end encryption is going to pose a significant problem for identification for you and for us,” Labor MP Louise Miller-Frost put to Cashman Kirstein at the committee hearing.
“It is a challenge,” Cashman Kirstein replied. “Our hope is that we would be able to provide, through legal process, as much information as possible around things, whether that’s metadata or signals.”