The Guardian - AU
Josh Taylor

TikTok received more requests to remove child bullying posts than any other social platform in Australia

TikTok received 209 requests in 2022 to remove posts that bullied children and 100 in 2023. Australia’s acting eSafety commissioner said complaints of cyberbullying on social media platforms from children under 14 had tripled since 2019. Photograph: Dado Ruvić/Reuters

TikTok received more requests from Australia’s eSafety commissioner to remove posts that bullied children in the last 18 months than any other social media platform.

Reddit received the most reports of people’s images being shared without their consent.

The eSafety commissioner has made a total of 795 requests to various platforms since the beginning of 2022 to remove alleged bullying of children. For TikTok alone, 209 requests were made in 2022 and 100 in 2023.

The Meta-owned Instagram platform followed with 186 requests in 2022, and 84 in 2023. Snapchat was next with 70 and 45, respectively. There were four requests in 2022 to Roblox, and two so far this year.

The data is contained in tables from the office of the eSafety commissioner, released to Senate estimates in response to questions on notice from the Greens senator, David Shoebridge. Not every complaint received by the commissioner’s office results in a removal request being made.

The acting eSafety commissioner, Toby Dagg, said earlier this month complaints of cyberbullying from children under 14 had tripled since 2019.

“We received around 230 cyberbullying complaints in May this year alone and around 100 of these involved children aged eight to 13 experiencing this kind of harm. Nasty comments, offensive pictures or videos, and impersonation accounts are among the most reported issues,” he said.

The office revealed it has made 852 removal requests for image-based abuse since the start of 2022. Reddit, which topped the requests last year, received 54 in total, while Twitter was third last year with 21 requests.

A spokesperson for Reddit said the platform took the issue seriously and site policies prohibit any nonconsensual sharing of intimate or sexually explicit media, adding that 60% of material is caught by automated systems before anyone sees the content.

“Reddit was one of the earliest sites to establish site-wide policies that prohibit this content, and we continue to evolve our policies to ensure the safety of the platform,” the spokesperson said.

Some sites that received removal requests were redacted in the table because they were predominantly created for the purpose of exploiting victims of image-based abuse, or of threatening or harassing users through doxing, the office said.

In May, the commissioner said that more young men had been reporting image-based abuse than women. There were 1,700 reports in total in the first quarter of this year, 1,200 of which came from people aged 18 to 24, and 90% of those were male.

Meta’s Facebook and Instagram received the most removal requests for the cyber abuse of adults over 2022 and 2023, with 267 and 153 requests respectively. TikTok was not far behind with 117, followed by YouTube at 37.

A spokesperson for Meta said the company heavily invests in safety tools to ensure users have a positive experience.

“Our policies clearly prohibit people from sharing and engaging in online abuse and we will remove this content as soon as we become aware. We also work collaboratively with the eSafety commissioner and have a dedicated reporting channel where the eSafety commissioner can report content directly to us for review,” the spokesperson said.

The vast majority of the removal requests were informal, the eSafety office said, with only 53 formal removal notices issued since the start of last year.

The eSafety commissioner’s office said it submitted 22,000 notifications to the International Association of Internet Hotlines for rapid removal of child exploitation material, and 20 class 1 removal notices related to pro-terror or violent extremism material in that time. But only 2% of those notifications or requests related to content on social media platforms such as Facebook, Instagram, TikTok or Twitter, the office said.

TikTok was approached for comment.
