The Independent UK
Albert Toth

What is the Online Safety Act? Law explained as Khan says bill ‘not fit for purpose’

The UK’s online misinformation laws have been criticised in the wake of last week’s far-right riots which were sparked, in part, by false information shared online.

Social media was thought to play a crucial role in allowing rioters to organise and incite violence, with limited moderation to stop them.

The events have brought the Online Safety Act into the spotlight. The bill, passed only last October, was supposed to give authorities greater powers to crack down on illegal content shared on the internet. Instead, it remains largely dormant as details continue to be hashed out before its provisions come into force.

Sadiq Khan has said the bill is “not fit for purpose,” urging ministers to act “very, very quickly” to review it. The London mayor added: “the way the algorithms work, the way that misinformation can spread very quickly and disinformation … that’s a cause to be concerned, we’ve seen a direct consequence of this.”

His comments come as the threat of far-right violence has seemingly dissipated for now. Planned action failed to materialise on Wednesday as anti-racist demonstrators outnumbered the few would-be rioters who did turn up in cities all over the UK.

Following the week of tension, the UK’s online laws will no doubt continue to be subject to debate. Here’s everything you need to know about the Online Safety Act:

What does the Online Safety Act do?

Passed in October 2023, the Online Safety Act is designed to make the internet “safer,” especially for children. A central element of the bill is the new duties it places upon social media firms, and the powers it gives Ofcom to enforce them.

The Department for Science, Innovation and Technology (DSIT) says the bill will make the UK “the safest place in the world to be a child online.” It will require platforms to prevent children from accessing harmful or age-inappropriate content.

The bill also names Ofcom as the UK’s independent regulator of online safety, giving the telecoms watchdog new powers to enforce the rules laid out in the act.

A number of new criminal offences are created by the act, including cyberflashing, intimate image abuse (so-called revenge porn), and epilepsy trolling. The most relevant to what the country has seen in recent days are “threatening communications” and “sending false information intended to cause non-trivial harm.”

The act also now requires social media firms to take action against illegal content and activity. Both “racially or religiously aggravated public order offences” and “inciting violence” are included as types of illegal content.

How will the act be enforced?

Ofcom will be able to enforce the rules laid out in the Online Safety Act in a number of ways. Platforms will need to begin providing evidence to the watchdog of how they are meeting the set requirements. Ofcom will then evaluate and monitor these companies, before deciding whether to take action over non-compliance.

Companies can be fined up to £18 million or 10 per cent of their worldwide revenue, whichever is greater. Criminal action can even be taken against senior managers who fail to ensure information requests from Ofcom are fulfilled.

Specifically on child-related offences, the watchdog will be able to hold companies and managers criminally liable for non-compliance.

And in “extreme” cases, Ofcom will be able to require internet providers and advertisers to stop working with platforms, essentially banning them from operating in the country. This would be subject to agreement from the courts.

These powers raise questions for large social media firms operating in the UK, such as Twitter and Telegram, both of which have been criticised in recent days for a lax approach to moderation on their platforms.

When will the act come into action?

This has been a key question, and criticism, since the Online Safety Act was passed. The DSIT says that duties relating to illegal content will come into effect in “early 2025.” Meanwhile, in response to last week’s riots, Ofcom says it will enforce them from “later this year.”

Many experts have been critical of the consultation time between the bill’s passing and the enforcement of new powers, with several still casting doubt on the given timeline. Professor Matthew Feldman, a specialist on right-wing extremism who teaches at the University of York, says implementation will still be taking place “into 2026.”

He also says he is “not convinced” the bill’s scope is wide enough, arguing “there is a debate to be had about whether the bill, as stands, is sufficient for the scale and scope of the events we’ve seen.”

For its part, Ofcom has penned an open letter to online service providers operating in the UK, saying there is no need to “wait” for the new duties to come into effect before making “sites and apps safer for users.”

Ofcom’s Group Director for Online Safety, Gill Whitehead, said: “We expect continued engagement with companies over this period to understand the specific issues they face, and we welcome the proactive approaches that have been deployed by some services in relation to these acts of violence across the UK.”
