Australia’s teen social media ban is going full steam ahead this year, but the requirements on social media companies to enforce it are still being hashed out.
Thirty-four companies have said they want to take part in the government’s trial of age estimation and verification methods. These methods are how social media companies will figure out the ages of their users to prevent under-16s from having accounts on their services. A report from the trial is due in the middle of the year and will inform the legal guidance given to social media companies about the ban’s requirements.
While there’s been a lot of attention paid to some of the newer (and divisive) technologies, such as facial analysis and Digital ID, it’s likely that another method, age inference, will be a significant part of the mix.
Age inference uses existing data like behavioural or online signals to “infer” a user’s age. This can range from something simple, like assuming that someone who’s been an active Facebook user since 2008 is an adult, to more sophisticated machine learning inferences like those apparently used by Google. Essentially, it’s like a detective looking at all the clues to judge whether the person on the other side of the screen is an adult or not.
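To make that concrete, here is a minimal, purely illustrative sketch in Python. The signals and weightings are hypothetical, not anything a platform has disclosed, but they show how a simple rule-based version of age inference might score an account:

```python
from datetime import date

def infer_likely_adult(account_created: date,
                       follows_school_pages: bool,
                       school_hours_activity: float) -> float:
    """Toy age-inference scorer (hypothetical signals and weights).

    Combines a few behavioural clues into a rough probability that the
    account holder is an adult. Real systems feed far more signals into
    machine learning models rather than hand-tuned rules like these.
    """
    score = 0.5  # start undecided
    # An account that has existed for 16+ years strongly suggests an adult
    account_age_years = (date.today() - account_created).days / 365
    if account_age_years >= 16:
        score += 0.4
    # Following lots of school-related pages nudges the guess towards "teen"
    if follows_school_pages:
        score -= 0.2
    # Barely being online during school hours nudges it towards "adult"
    if school_hours_activity < 0.1:
        score += 0.1
    return max(0.0, min(1.0, score))

# An account created back in 2008 scores as almost certainly an adult
print(infer_likely_adult(date(2008, 3, 1),
                         follows_school_pages=False,
                         school_hours_activity=0.05))
```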
An advantage of this method is that it’s non-invasive: it doesn’t ask the user for anything, only reviewing information that’s already on hand. The flip side is that it’s much less accurate than other methods; even the best guess will be wrong more often than simply asking someone to show their ID.
If your head is spinning with technical mumbo-jumbo, let’s look at how this technology can work in practice by reviewing an example that was accidentally brought to light last year.
In May 2024, X account @DiscordPreviews shared that it had found proof that popular chat platform Discord was algorithmically determining the age and gender of its users behind the scenes.
While Discord asks users for their age when signing up, the application was also using cues to make statistical predictions about users’ demographics, without their knowledge. For example, a user might be given a 74% chance of being aged 18-24 and a 91% chance of being male.
This led to some curious quirks. Another X user noted the data could be used to graph Discord’s changing prediction of a user’s age: based on their usage, the platform grew more confident over time that they were 25-34 rather than 18-24. Discord did not immediately return a request for comment.
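In data terms, those shifting predictions are just a probability spread across age brackets that updates as usage changes. Here is a hypothetical reconstruction (the field names and numbers are invented, not Discord’s actual format) of how that drift could be tracked:

```python
# Hypothetical reconstruction: field names and numbers are invented, not
# Discord's real data format. The shape matches what the leaked predictions
# suggested: a probability for each age bracket that shifts with usage.
predictions_over_time = [
    {"date": "2023-01", "18-24": 0.74, "25-34": 0.20, "35+": 0.06},
    {"date": "2023-07", "18-24": 0.55, "25-34": 0.38, "35+": 0.07},
    {"date": "2024-01", "18-24": 0.31, "25-34": 0.61, "35+": 0.08},
]

brackets = ("18-24", "25-34", "35+")
for snapshot in predictions_over_time:
    likeliest = max(brackets, key=snapshot.get)
    print(f'{snapshot["date"]}: most likely {likeliest} '
          f'({snapshot[likeliest]:.0%} confident)')
```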
This finding gives rare insight into how age inference systems work: they can be inaccurate and inscrutable but are already in use and relatively simple to implement.
The government’s age assurance trial tender hinted at a sliding scale of approaches to figuring out a user’s age, based on the risk to the user. For example, it is plausible the government might decide it doesn’t mind social media platforms using a less reliable age assurance method for signing up to their services than, say, an online store would need to sell booze. The prime minister has said as much, admitting the ban won’t be perfect.
That’s where something like age inference might end up being a large part of how social media companies try to restrict teens. Discord’s example shows tech companies are already capable of using cues to determine users’ ages, and are actively doing so. The question is whether these methods will work well enough to keep enough teens out for the ban to actually work.