A third of social media users aged between eight and 17 have the online user age of an adult, having signed up with a false date of birth, according to new research.
The false age means that young users in the UK are at greater risk of being exposed to harmful or adult content, as platforms presume they are older than they actually are.
The majority of children aged between eight and 17 who use social media have their own profile on at least one of the main platforms, according to research commissioned by Ofcom, the communications watchdog.
“When a child self-declares a false age to gain access to social media or online games, as they get older, so does their claimed user age. This means they could be placed at greater risk of encountering age-inappropriate or harmful content online,” said Ofcom.
The regulator added that once a user reaches 16 or 18, some platforms introduce features not available to younger users, such as direct messaging or the ability to see adult content.
The study covered six of the leading platforms – Facebook, Instagram, TikTok, Snapchat, Twitter and YouTube – all of which have a minimum age limit of 13. Its findings suggested that 32% of children aged eight to 17 with a social media profile have a user age of 18 or over, while nearly half of children in the same age bracket have a user age of 16 or over.
The most popular site among all eight- to 17-year-olds was YouTube, followed by TikTok and then Instagram. The majority of respondents had set up their account profile themselves.
The online safety bill, which is due to resume its progress through parliament before Christmas, imposes a duty of care to protect children from harmful content. The inquest into the death of Molly Russell, a 14-year-old who took her own life in 2017 after viewing harmful content on platforms including Instagram and Pinterest, found she signed up for an Instagram account at the age of 12.
One expert in internet safety described the study, based on a survey of more than 1,000 young social media users by Yonder Consulting, as a signal from Ofcom to tech firms that it knows where there are flaws in their operations.
“This is a warning shot to platforms that Ofcom knows what is going on with these services,” said William Perrin, a trustee of the Carnegie UK Trust.
Under the online safety bill, platforms are required to prevent children from accessing harmful content – such as suicide and self-harm material – with systems that could include rigorous age checks. As part of a risk assessment process required under the bill, Ofcom will then decide whether each platform’s approach to age checking is thorough enough.
A spokesperson for the Molly Rose Foundation, set up by Molly Russell’s family, said: “Effective regulation through the online safety bill cannot come too soon. The Ofcom research shows that by allowing children from the age of eight on to their platforms, social media providers fail in a basic duty of care. They have proved unable to control their platforms’ capability to connect our children with distressing and harmful content, resulting in tragic outcomes.”
The spokesperson added: “Had the appropriate age checks been carried out in Molly’s case, she might have been spared a whole year of exposure to harmful content on Instagram.”
Ofcom also published research showing that children preferred a “self-declaration” method of age assurance for social media platforms, while parents often preferred “parental confirmation”, where they confirm an account holder’s age.
Meta, the owner of Instagram and Facebook, uses artificial intelligence to find underage users. YouTube, owned by Google, allows children under 13 to open accounts with parent or guardian supervision, as well as offering the separate YouTube Kids platform, while TikTok has an age gate that requires people to fill in their complete date of birth.
A Snapchat spokesperson said: “Age verification is an industry-wide challenge and we are in ongoing conversations with other companies and policymakers about consistent and effective solutions.”