The Guardian - UK
Technology
Alex Hern and Dan Milmo

TechScape: Could this be the end of trolls and cyberflashers?

The online safety bill places a duty of care on tech companies. Photograph: Ossi Lehtonen/REX

The final version of the UK’s online safety bill, which was laid before parliament last week, is likely to face a fair few amendments as it passes through the legislative process. But the draft gives us our first real look, beyond the government’s spin, at what aspects of UK law are about to change.

The bill places a duty of care on tech firms to protect users from harmful content. That duty is broadly split into three parts: limiting the spread of illegal content such as child sexual abuse images, terrorist material and content encouraging suicide; ensuring children are not exposed to harmful or inappropriate content; and, for the big platforms like Facebook, Twitter and TikTok, protecting adults from legal but harmful content (such as cyberbullying and eating disorder-related material). The legislation will be overseen by Ofcom, the communications regulator, which will have the power to impose fines of up to £18m or 10% of a company’s global turnover, whichever is greater, for breaches of the act. In extreme cases, it can also block websites or apps. Here is a quick précis of the bill, which is expected to become law by the end of the year.

Illegal content

The bill applies to companies that host user-generated content (so that’s everything from tweets to gym sessions on YouTube) and it requires every organisation within its scope to have in place systems and processes that take down illegal content as soon as they see it. There is also a list of “priority” harms that must be tackled proactively so users don’t get to view them. Those harms include money laundering and human trafficking (alongside the terrorist and child sexual abuse content). The bill also applies to search engines like Google.

Legal but harmful content

This is the most controversial element of the bill. Removing “legal but harmful” – nasty but not criminal – content is a duty that applies to so-called “category 1” services, which are the big hitters like Facebook, Instagram, Twitter, YouTube and TikTok. The legal but harmful clause has alarmed civil liberties campaigners who warn that it will be a censors’ charter: content that resides in a grey area, like satire, could be subject to widespread takedowns.

The bill requires that category 1 companies carry out risk assessments detailing the legal but harmful content that could appear on their services (abuse of footballers on Twitter; Russian disinformation on Facebook). They will then have to set out how they will deal with it in their terms and conditions, and state whether they intend to limit, remove or allow that sort of content. So, as a user, you know what you’re in for when you log on.

But politicians are also going to have a say on what is unacceptable, through “priority” categories of legal but harmful content, on which companies must state their policy clearly in their terms and conditions. These categories will be outlined in secondary legislation and subject to approval by both houses of parliament. In its press release announcing the bill, the government referred to “self-harm, harassment and eating disorders” as the types of content that platforms will have to tackle. Nadine Dorries, the culture secretary, says this will take the power of censorship away from tech execs (it just ends up with the government instead).

Child protection

The child-protection elements of the bill, underpinned by the duty of care, have been talked up strongly by Dorries. The measures include requiring pornography sites to carry out age checks, which privacy campaigners fear could create a database of the nation’s sexual habits, or even become a backdoor to a digital ID system for accessing the internet. However, supporters of the measure argue that the impact of watching pornography on teenagers is such that action has to be taken.

Companies will also be required to report to the National Crime Agency any child sexual exploitation and abuse content they detect on their platforms. But the reporting requirement goes further than many are comfortable with: the bill contains provisions that could be used to require companies to actively monitor private chats for such content. And those provisions contain no exception for messages, such as those on WhatsApp, that are “end-to-end encrypted”, meaning the platform cannot see their contents. Critics fear the rules could amount to an effective ban on the technology, the culmination of a five-year battle waged by the government.

Freedom of the press and expression

Under the terms of the bill, “news publisher” content has protections, as does “journalistic” content. News websites such as the Guardian, BBC and MailOnline are not covered by the bill (so do not have to follow its guidelines), while below-the-line comments are protected, as is news content re-posted on social media platforms. The wording of the bill doesn’t give news content a free pass, however. For instance, the bill requires tech firms to “ensure that the importance of the free expression of content of democratic importance is taken into account” when making decisions about “how to treat such content (especially decisions about whether to take it down or restrict users’ access to it)”. The same wording is used for journalistic content, and it implies that content takedowns could still happen.

The press release accompanying the bill appears to acknowledge concern among news publishers about the extent of these protections, stating: “Ministers will also continue to consider how to ensure platforms do not remove content from recognised media outlets.” Dorries has said she intends to introduce amendments to the bill ensuring that news publisher and journalistic content cannot be taken down until an appeal against a removal decision has taken place.

Criminal charges

The government has reduced the grace period for UK-wide criminal measures against tech execs by 22 months, to just two months (ie there was going to be a two-year wait, from the moment the bill became law, before the offences were introduced). Tech execs will face up to two years in prison if they: hamper an Ofcom investigation by failing to provide information in an “accurate and timely manner”; tamper with information being sought by Ofcom; obstruct an Ofcom audit or inspection; or provide false information, including at an interview. Companies can also be prosecuted under the last three of those offences.

There are also additional criminal offences for users in England and Wales: cyberflashing (sending an unsolicited sexual image); sending or posting a message that conveys a threat of serious harm; sending a communication with the intent of causing psychological harm or serious emotional distress (this covers Twitter pile-ons); and deliberately sending a false message with the intention of causing harm (for example, pushing harmful and ineffective Covid treatments).

If you want to read the complete version of the newsletter please subscribe to receive TechScape in your inbox every Wednesday.

This article was amended shortly after publication to make clear the requirements on companies around legal but harmful content.
