
Online safety regulator Ofcom has launched an enforcement programme to assess whether tech firms are complying with the Online Safety Act.
The regulator said platforms had until March 31 to send Ofcom their first risk assessment, setting out how likely users are to encounter illegal content on their services.
Ofcom said these risk assessments were vital in helping sites understand how harm could take place on their services, so they could put appropriate safety measures in place.
Under the Online Safety Act, social media platforms are required to follow new codes of practice on a range of topics, including stopping illegal content from appearing on their sites. Those that fail to comply face fines of up to 10% of turnover or £18 million, whichever is greater.
In the most serious cases, Ofcom can also apply to a court to have a site blocked in the UK.
The codes of practice are being introduced in stages, with new online safety rules taking effect over time.
The illegal content codes cover material such as child sexual exploitation and abuse, terrorism, hate crimes, content encouraging or assisting suicide, and fraud.
The regulator said it was ready to enforce against non-compliance with the Online Safety Act, and would use the risk assessments to identify gaps and drive improvements, as well as to inform the further development of its codes of practice.
Suzanne Cater, Ofcom’s enforcement director, said the risk assessments were a “vital first step” towards platforms better protecting users and making their sites “safer by design”.
She said: “We’ve identified a number of online services that may present particular risks of harm to UK users from illegal content – including large platforms as well as smaller sites – and are requiring them to provide their illegal harms risk assessment to us this month.
“We’re ready to take swift action against any provider who fails to comply.”
The online safety rules have been criticised by some campaigners, who argue the codes of practice create only a narrow “checklist” for platforms to follow in order to comply with the law, and could see some do less on safety than they currently do.