Digital Camera World
Leonie Helm

“Horrific and traumatizing videos” – the dark world of the social media moderators


Artificial intelligence is playing a growing role across the tech industry, but it is still the job of humans to review and delete disturbing and harmful content on social media platforms.

Beheadings, child abuse and murders all end up in the inboxes of a global network of moderators.

Online safety has become a matter of increasing concern across the internet and social media, and companies are under mounting pressure to remove harmful photos and videos faster.

A great deal of research and money has gone into tech solutions, such as AI, designed to catch this material earlier, but for now the task still falls to dedicated humans.

Often employed by third-party companies, these moderators work on content posted directly to big social networks including TikTok, Facebook and Instagram.

Zoe Kleinman, technology editor at the BBC, has produced a series called The Moderators for Radio 4 and BBC Sounds, and she described the stories of these moderators, most of whom live in East Africa, as “harrowing.”

“Some of what we recorded was too brutal to broadcast. Sometimes my producer Tom Woolfenden and I would finish a recording and just sit in silence,” she said.


One of the moderators they spoke to was called Mojez, a former Nairobi-based TikTok moderator.

“If you take your phone and then go to TikTok, you will see a lot of activities, dancing, you know, happy things. But in the background, I personally was moderating, in the hundreds, horrific and traumatizing videos.”

“I took it upon myself. Let my mental health take the punch so that general users can continue going about their activities on the platform.”

The BBC reports that there are multiple ongoing legal cases claiming this line of work has destroyed moderators’ mental health, and some former East African moderators have joined forces to form a union.

It adds that in 2020, Meta agreed to pay a settlement of $52 million (around £41 million / AU$81 million) to moderators due to mental health issues caused by their roles.

One moderator explained to the BBC that due to the child abuse he had witnessed, he found it difficult to communicate and interact with his wife and children.

This is a role screaming out to be taken over by AI but, as is becoming increasingly clear, we do not yet have full control over AI technology, and it’s highly likely that disturbing content would slip through the gaps and end up circulating around the globe.

TikTok has said that AI screens content before it reaches human moderators, but the amount that still gets through suggests the technology is in no way sophisticated enough to take on the role entirely.


Despite the trauma caused by their work, these human moderators do not want AI to take over this role. Kleinman says that they saw themselves as a vital emergency service.

“Not even one second was wasted,” says someone the team called David. He asked to remain anonymous, but he had worked on material used to train the viral AI chatbot ChatGPT, teaching it not to regurgitate horrific material.

However, it’s possible this tech might one day take over David’s job. The former head of trust and safety at OpenAI, the creator of ChatGPT, said his team built a moderation tool based on the chatbot that was able to identify dangerous content with an accuracy rating of 90%.

“When I sort of fully realized, ‘Oh, this is gonna work,’ I honestly choked up a little bit,” he says. “[AI tools] don't get bored. And they don't get tired and they don't get shocked… they are indefatigable.”

However, some experts have highlighted more issues with AI and moderation of content.

“I think it’s problematic,” says Dr Paul Reilly, senior lecturer in media and democracy at the University of Glasgow.

“Clearly AI can be a quite blunt, binary way of moderating content. It can lead to over-blocking freedom of speech issues, and of course it may miss nuance human moderators would be able to identify. Human moderation is essential to platforms. The problem is there’s not enough of them, and the job is incredibly harmful to those who do it.”

The Moderators can be listened to on BBC Sounds.


Take a look at our guides to the best AI image generators and the best photo culling software.
