The Guardian - UK
World
Dan Milmo Global technology editor

Distressing Annecy footage put social media’s self-regulation to the test

Annecy residents gather to support the victims and their families in Annecy on Sunday. Photograph: Jean-Philippe Ksiazek/AFP/Getty Images

Most social media users know to self-regulate when violent events such as terror attacks occur: don’t share distressing footage; don’t spread unfounded rumours.

But in the aftermath of the Annecy attack some inevitably acted without restraint.

Bystander footage of a man attacking children in a park in south-east France appeared online after the attack on Thursday and was still available, on Twitter and TikTok, on Friday.

The distressing footage has been used by TV networks but is heavily edited. The raw versions seen by the Guardian show the attacker dodging a member of the public and running around the playground before appearing to stab a toddler in a pushchair.

Twitter, as is now company policy under Elon Musk, did not respond to a request for comment, although of the two clips that the Guardian found on Friday, one had been taken down by Saturday and the other had been placed behind an age-restricted content warning screen.

Both Twitter clips had a total of 50,000 views by Friday.

Social media has been largely self-regulated – although that is now changing – so the platforms’ own content guidelines are the main test of whether such footage should stay up. Twitter’s sensitive media policy states that people often use the platform to “show what’s happening in the world”, and that this can include “graphic content”. Users who choose to share such material must mark their account as “sensitive”, which places the content behind a warning screen.

In the case of the Annecy clip, it appears to have been deemed “adult” rather than “sensitive” content. Either way, it was at least made more difficult to view and had either been reported to Twitter and acted upon, or caught by its moderation systems – but not before tens of thousands of views.

The TikTok video had been uploaded by a news account for Azteca, a Mexican broadcaster with 4.1 million followers. It showed the same unedited footage, including the stabbings, but was accompanied by a jarring soundtrack labelled as “Drama music suitable for battle scenes”. TikTok took down the video once it had been notified by the Guardian but not before it had been viewed more than 5m times.

TikTok said: “Our teams have been responding to this tragic event by working rapidly to identify and remove violative content. We do not allow disturbing or extremely violent content on TikTok.”

TikTok’s guidelines state that the platform does not allow “gory, gruesome, disturbing, or extremely violent content”. There is a public interest exception for otherwise violative content that is deemed to “inform, inspire, or educate” – although the Annecy footage clearly does not qualify under that heading.

Elsewhere, the owner of Facebook and Instagram, Meta, said any copies of the Annecy footage would be removed under its “dangerous organisations and individuals” policy that bars content showing “multiple-victim violence”. YouTube has removed or age-restricted some bystander footage of the attack under policies that prohibit content intended to shock or disgust users.

Tech platforms may cite this as an example of self-regulation working – albeit, in the case of Twitter and TikTok, imperfectly – but that era is ending. In the UK, under the online safety bill, due to become law this year, platforms face censure if they do not enforce their terms and conditions, which include their guidelines on violent content.

There is also a duty of care to shield children from harmful content, which could include the Annecy footage. Lorna Woods, a professor of internet law at the University of Essex, added that there was “nothing to stop” platforms changing their terms of service and what sort of content they allowed.

In the EU, the Digital Services Act requires platforms to have measures in place to mitigate the risk of harmful content.

Both acts carry the threat of substantial fines, and in extreme cases suspension, for breaches.

Social media platforms are now in a world where, if their policies restrict unedited Annecy-style footage, they have to follow words with actions. Regulators will be watching.
