The Guardian - UK
Politics

‘A torrent of abuse’: victims pin hopes on UK online safety bill

Dan Milmo and Hibaq Farah

The online safety bill will impose a duty of care on tech firms to protect users from harmful content, or face large fines. Illustration: Guardian Design

The online safety bill is a landmark piece of legislation that aims to prevent harm to people online, ranging from racist tweets to harmful content pushed at users by powerful algorithms.

The revised bill, published on Thursday, will impose a duty of care on tech firms to protect users from harmful content, or face large fines from the communications watchdog.

The Guardian has spoken to people who have suffered the kind of harm that the bill is trying to prevent and penalise.

Gina Miller, 56, victim of racist and sexist abuse

Gina Miller. Photograph: Henry Nicholls/Reuters

Miller is clear about the motivation for the online hatred she has suffered over the years: “It was a torrent of abuse that came from my being a woman of colour.”

The transparency campaigner says the abuse began in 2015 when she highlighted concerns over the financial and charity sectors. This escalated when she spearheaded successful court cases against the government’s actions over Brexit.

She says Facebook and Twitter were the worst platforms for hosting racist messages. “The messages that were posted ranged from things like saying ‘Go back home you dirty foreigner’ to ‘This woman should be killed’,” says Miller, who was born in British Guiana, now Guyana, to parents of Indian descent.

Miller welcomes the fact that the bill will criminalise digital “pile-ons” where victims are subjected to abuse online from multiple people simultaneously. It will also ensure that big tech platforms prioritise tackling specific types of “legal but harmful” content, which is expected to include racist abuse.

Ian Russell, 58, father of Molly Russell

Ian Russell. Photograph: Ken McKay/ITV/Rex

Molly, 14, from Harrow in north-west London, viewed content on Instagram and other social media platforms linked to anxiety, depression, self-harm and suicide before taking her own life in November 2017.

Her father, Ian, says social media platforms have been allowed to regulate themselves, a laissez-faire approach that failed his family. “Social media has been allowed to evolve in a self-regulated manner. That clearly does not work, otherwise tragedies like Molly’s would not have happened.”

Ian says social media has shown its “good side” recently with its role in the coronavirus pandemic and the Ukraine conflict, but young people need greater protection when using platforms. “We need regulation so that when young people use social media, which they should do, because it does tremendous good, that they are protected from the harm it can cause.”

The legislation, which applies to tech firms that host user-generated content as well as search engines, will require companies to put in place systems that spot illegal content, such as posts that promote or facilitate suicide. Platforms will also need to ensure that their algorithms, which curate what a user sees, do not target vulnerable users with inappropriate content.

Jill, 75, online advert victim

Scam adverts will be included in the scope of the bill, meaning that the largest social media platforms will be required to prevent paid-for fraudulent adverts from appearing on their sites. This includes adverts carrying fake celebrity endorsements.

It will be too late for Jill, a retired psychotherapist living in Cambridgeshire. Jill, who asked for her surname to be withheld, lost more than £30,000 in 2020 after clicking on an advert on Facebook for a cryptocurrency investment carrying a fake “endorsement” from the Dragons’ Den star Deborah Meaden.

“It’s terribly distressing and it all started with one small advert on Facebook. They need to police these adverts,” she says. “It affected my family because they were very upset and angry. It cast a cloud over us for a couple of years.”

Meaden has said in a statement on her website that she has no association with, and has made no investments in, bitcoin trading platforms. “I have taken steps to get the unauthorised material removed and am taking appropriate action on the individuals and/or companies who have decided to scam people in this way,” she wrote.

Katie Scott, 24, eating disorder survivor

The bill will impose a duty of care on tech firms to protect users. Photograph: Justin Lambert/Getty Images

Scott says she encountered online content as a teenager that encouraged her eating disorder, including people posting ideas for extreme diets.

“From the ages of 14-18, I would access a lot of harmful content on Tumblr and Instagram. This content was often referred to as pro-anorexia content, which was essentially a community of people encouraging each other to participate in dangerous behaviour,” says Scott, from Reading. “Viewing this content online made me feel less aware of how dangerous my behaviour was. It felt like an external manifestation of the disorder that was already in my head.”

Scott says there needs to be a dedicated effort to prevent this content from being so widely accessible. She says similar content still exists on social media platforms such as TikTok and Instagram. “There needs to be more done than just shutting down and reporting accounts that promote eating disorders. The accounts need to be identified sooner, with more of an emphasis on keeping on top of new hashtags or spaces developing.”

The list of legal but harmful content that the government expects tech firms to tackle will be set out in secondary legislation. But the press release accompanying the revised bill indicates that content relating to self-harm and eating disorders will be among the areas covered.

Frida, 21, grooming survivor

Frida says social media platforms left her vulnerable to being groomed online at 13, after a man in his 30s contacted her on Facebook. The resulting online relationship lasted for years, an experience that left Frida with long-term depression.

“At the time, I was really miserable at school. I was getting bullied and didn’t really have any friends, so I didn’t have anything to lose,” says Frida, whose name has been changed.

She says there were few safeguards on Facebook to protect her from the abuse. “There were increased risks with things like end-to-end encryption, for example, or the fact that my abuser was able to so easily add and message me, and cross-platform risk.” She says the initial contact on Facebook soon moved to messaging on WhatsApp – which is encrypted, meaning the messages can be viewed only by the sender and recipient.

“So much of what I interacted with on Facebook left me at risk,” says Frida, who has campaigned with the NSPCC, the child protection charity, to strengthen the bill. “What I’d like to see in this online safety bill is more focus on risk and what increases risk. I’d like to see a culture of safety by design being implemented on apps like Facebook.”

The bill’s duty of care is split into several parts, including a requirement to protect children from illegal activity such as grooming. Tech companies will have to carry out risk assessments detailing how abuse could occur on their platforms and how to prevent it. These risk assessments will be overseen by Ofcom, the communications watchdog, which has been charged with implementing the legislation.
