The Guardian - UK
Dan Milmo and Ben Quinn

How false online claims about Southport knife attack spread so rapidly

Flowers and tributes were laid outside the Atkinson art centre in Southport, Merseyside, after three children were fatally stabbed at a holiday club. Photograph: James Speakman/PA

Speculation about the identity of the suspect behind the Southport stabbings, in which three children died, has been rampant on social media.

The home secretary, Yvette Cooper, has urged the public to avoid “unhelpful” speculation and has said social media companies “need to take some responsibility” for content related to the attack.

Here we answer some questions about how false claims about the incident in Merseyside spread online so rapidly.

How did misleading content spread?

Soon after the attack, social media accounts began spreading inflammatory speculation. The platform X played a key role as the events unfolded.

An account called Europe Invasion, known to publish anti-immigrant and Islamophobic content, posted on X at 1.49pm, soon after news of the attack emerged, that the suspect was “alleged to be a Muslim immigrant” – a claim that was false. The post has since been viewed 6.7m times.

Joe Ondrak, a senior analyst at Logically, a UK company that monitors disinformation, said the Muslim immigrant claim was amplified across social media as a result.

“That particular phrase went around all the far-right influencers and channels,” he added.

Dr Marc Owen Jones, an associate professor at Hamad Bin Khalifa University in Qatar, reported there were at least 27m impressions of posts on X that stated or speculated that the suspected attacker was Muslim, a migrant, a refugee or a foreigner. He said in a thread on X that there was a “clear” attempt by “rightwing influencers and grifters” to push an anti-immigrant and xenophobic agenda.

Prominent rightwing figures played a role in spreading false claims. Tommy Robinson, a British far-right activist whose real name is Stephen Yaxley-Lennon, said in a post on X that rioters in Southport were “justified in their anger” while Andrew Tate, a misogynist influencer, claimed the attacker was an “illegal migrant” and told people to “wake up”.

X reinstated Robinson’s and Tate’s previously banned accounts after Elon Musk bought the platform in 2022. X has been approached for comment.

Where did a false name come from, and how was it spread online?

The sharing of a false name for the suspect became the catalyst for the most potent disinformation.

It is unclear what the first source was for the fabricated name, which appeared to have been chosen to reflect Islamophobic tropes, but it was amplified by a faux news website calling itself Channel 3 News Now. That site, which does not have any named personnel and features a mix of emotive US and UK news and sport stories, responded to an email from the Guardian about the report to say: “We deeply regret any confusion or inconvenience this may have caused.”

Its post on X and subsequent “coverage” appeared to have amplified the original piece of disinformation, before it was widely shared over 48 hours by far-right activists, conspiracy theorists and self-styled “influencers” on platforms such as TikTok. Searches for the fake name, which spiked sharply on Monday, have now faded away.

How do bot accounts work and are they behind some of the content?

Most artificial influence campaigns use the AI models that power chatbots to write and send tweets, according to Marcus Beard, a former Downing Street official who headed No 10’s response to countering conspiracy theories during the Covid crisis.

This approach is now far too advanced to give away telltale signs on a tweet-by-tweet basis, he adds.

That said, Beard and other analysts point out that there are clearly poor-quality bots tweeting about the Southport incident, though he cautions that this is mainly “engagement farming”. Such bots have been around since 2010.

Ava Lee of the campaign group Global Witness said: “Accounts that appear to be bots are using the horrific events in Southport to fuel division at a time when the community is calling for calm.”

Is there evidence of involvement by hostile states in the spread of the disinformation?

Experts are divided over whether assets controlled by states such as Russia have been involved in amplifying the claims.

Some point out that the oldest clips on a YouTube account run by Channel 3 appear to show car racing in Russia 12 years ago, complete with Russian captions.

However, Beard said the evidence for any concerted Russian involvement was flimsy, adding: “Russian state tactics tend to be more focused on sowing discord and planting multiple narratives to confuse and divide. This feels a little too one-sided for that.”

Stephanie Lamy, a disinformation strategies analyst, said: “Channel 3 looks like a traffic farming website that monetises content through advertising. Content is probably generated by AI. Data is harvested from social media and traditional media, without citing sources. It is therefore vulnerable to harvesting ‘bad’ data.”

What can be done to prevent the spread of inflammatory content?

The UK’s Online Safety Act 2023 contains provisions requiring social media platforms to tackle illegal content such as threats against people of a particular race, religion, sex or sexual orientation, and to protect users from an offence known as “false communications”. The largest tech firms, including the major social media companies, will also be required to apply their terms of service consistently, including guidelines that prohibit the spread of false information (misinformation is the term for unintentionally false information, while disinformation is deliberately misleading). However, these codes will not start to be implemented until the end of the year.

Under existing law in England and Wales, it is already an offence to send threatening, abusive or offensive messages on social media. However, the immediate taking down of dangerous and misleading content relies on social media companies enforcing their guidelines.
