Tech bosses face the threat of prosecution and up to two years in jail from next year if they hamper investigations by the communications watchdog, under a wide-ranging overhaul of a landmark online safety bill.
The government has reduced a grace period for criminal prosecution of senior managers by 22 months from two years to just two months, meaning tech bosses could be charged with offences from early next year.
The change was announced as the government published a revamped online safety bill, which places a duty of care on social media platforms and search engines to protect users from harmful content. The new measures include:
• New criminal offences in England and Wales covering cyberflashing, taking part in digital “pile-ons” and sending threatening social media posts.
• Tech firms must prevent scam adverts from appearing online.
• Big platforms must tackle specific categories of legal but harmful content, which could include racist abuse and posts linked to eating disorders.
• Sites hosting pornography must carry out age checks on people trying to access their content.
The updated legislation introduced to parliament on Thursday confirms, and brings forward, UK-wide proposals for a fine or jail for senior managers who fail to ensure “accurate and timely” responses to information requests from regulator Ofcom.
It introduces a further two new criminal offences that apply to companies and employees: tampering with information requested by Ofcom; and obstructing or delaying raids, audits and inspections by the watchdog. A third new criminal offence will apply to employees who provide false information at interviews with the watchdog.
Nadine Dorries, the culture secretary, said tech firms had not been held to account when abuse and criminal behaviour had “run riot” on their platforms. Referring to the algorithms that tailor what users see on social media platforms – which have been heavily criticised during scrutiny of the draft bill – she added: “Given all the risks online, it’s only sensible we ensure similar basic protections for the digital age. If we fail to act, we risk sacrificing the wellbeing and innocence of countless generations of children to the power of unchecked algorithms.”
The legislation’s duty of care applies to internet companies which host user-generated content – such as Twitter, Facebook and TikTok – and search engines such as Google.
The duty is split into several categories, including: limiting the spread of illegal content such as terrorist material, child sexual abuse images and hate crime; protecting children from harmful content; and, for the biggest platforms, protecting adults from legal but harmful content, which is likely to include racist abuse and content linked to eating disorders.
The priority categories of legal but harmful content, which tech firms will be required to police, will be set out in secondary legislation. The government argues that this means the definition of harmful content will not be delegated to tech executives. Nonetheless, civil liberties groups are concerned that this will give ministers the power to censor content. On Wednesday the Open Rights Group called the bill an “Orwellian censorship machine”.
Companies that breach the act face fines levied by Ofcom of up to 10% of global turnover – which in the case of Facebook’s parent would be nearly $12bn (£9.2bn) – or £18m, whichever is higher. The watchdog will also have the power to block sites and apps under the bill, which is expected to become law at the end of the year.
Other changes in the bill include giving users on the biggest social media sites the option of blocking anonymous accounts, in a move designed to counter online trolls. Large tech firms will be expected to provide “risk assessments” to Ofcom in which they will detail how their platforms could cause harm to users – including the workings of algorithms – and the systems they have in place to prevent those harms.
Dorries told ITV’s This Morning programme on Wednesday: “It’s the algorithms that cause the harm, so this bill will compel those platforms to expose those algorithms to our regulator so they can pick up where the harm is happening and hold those platforms to account.”
The bill will exempt content from news publishers and journalists from censorship, although Dorries said on Wednesday that journalistic content would be protected only “providing it’s legal”. Platforms that seek to remove journalistic content must give notice of its removal and allow for an appeal against the takedown, Dorries added. The bill also contains provisions protecting content of “democratic importance”, which is designed to protect political debate.
The bill’s publication came as Instagram, owned by Mark Zuckerberg’s Meta, launched new tools for parents and guardians to monitor teens’ use of the platform. Starting in the US, the measures include receiving updates on the accounts that teenagers follow and setting time limits on use of the platform. Meta also unveiled new parental controls for its Oculus virtual reality headsets, including giving parents the ability to block access to certain apps. Zuckerberg’s embrace of the metaverse concept – where people interact in a blended virtual and physical world – has triggered warnings that VR will unleash a new wave of problems with policing digital platforms.