Crikey
Comment
Alice Dawkins

It’s algorithms, targeting and ads: The worst online harms stem from the systems that feed content

Digital platforms have been in the crosshairs over the past few weeks for their role in destructive social behaviours, from youth crime to partner violence.

It’s a good thing that government is gradually gaining the confidence to better regulate large digital platforms. Pursuing digital platforms for the harms they cause is not mutually exclusive with complex, desperately needed social policy measures. Nor should scrutinising platforms for distributing harmful or misleading content be seen as coming at the expense of strengthening the news media environment more broadly.

The reality of the internet’s embeddedness in our lives means these issues are concentric and co-dependent. But there are two defining challenges for the government in its campaign to counter big tech: how risks and harms from digital platforms are defined, and who gets the power to write the rules. 

Online harms and platform risks 

Australia’s online safety framework was drafted when knowledge of digital platforms’ powerful underlying systems was still relatively nascent. The regulatory focus was directed at surface-level content, and the solutions were largely after-the-fact measures, such as takedowns. Government is well aware this approach is outdated, as indicated by the growing language around platforms’ “systems and processes” and calls for “systemic” regulation in the new issues paper on online safety released this week. 

The shift to regulating systems arises from a realisation that content can harm — but not all harms come from content. Arguably the worst online harms stem from the underlying digital systems themselves. And as privacy advocates remind us, these systems are entangled with an unruly and unregulated data market.

Content-recommender systems (also known as algorithms) are routinely found to actively promote content harmful to mental and physical health, such as pro-suicide or pro-eating disorder content. A recent study found variable results in platform mitigations: TikTok appears to block algorithmic distribution of eating disorder content, indicating it is operationally feasible; Instagram’s algorithm, however, was only partially effective at preventing this type of content from spreading into young people’s feeds. Over on X, the algorithm takes users from pro-eating disorder content to pro-suicide content after fewer than 40 posts. 

It suits bombastic tech executives to keep regulatory fights focused on reactive measures like content takedowns. Proactive measures targeting platforms’ underlying systems would require a substantial redistribution of company resources to answer tough consumer safety questions, the kind of questions companies in any other sector are well used to answering.

A systems-first approach recognises that, irrespective of the offline harm, the tools platforms have to mitigate risks are the same. Whether the harm is eating disorders, crypto scams, male violence or election integrity, the technical options do not change: adding more checks before approving paid-for ads, building in safety measures to ensure those ads don’t target vulnerable consumers, or preventing certain types of content from being algorithmically boosted.

Platforms have the tools and expertise to do all of this and more, but no incentives to do it consistently, transparently and well. 

Who writes the rules? 

Another legacy from a more hopeful era in digital platform regulation is the industry codes process. Whether the codes are voluntary (as for misinformation and disinformation) or mandatory (as for online safety), the industry “holds the pen” on drafting. The technical input the industry can provide to these exercises should not be dismissed, but its outsized influence over drafting outcomes should be. The industry’s input should, at the very least, be reduced to match the input presently extended to public interest researchers and affected communities.

These groups, including this author’s employer, Reset.Tech Australia, routinely demonstrate more knowledge and sharper, country-specific evidence about how digital platforms’ systems function than the digital industry’s lobbyists.

Harm will continue while we wait for industry-led regulation to fail. Government is gradually locating the “systemic” tools it needs, but these efforts will be for naught if it continues asking the industry to finish the job. The task ahead for government is unenviable, but targeting digital platforms at the root of their harms and confronting the pointy end of their power is the only way out.
