The Guardian - AU
Comment
Samantha Floreani and Lizzie O'Shea

We must target the root cause of misinformation. We cannot fact-check our way out of this

[Image: a phone displaying social media apps. ‘The underlying business model is rotten, and that ought to be the focus of our attention.’ Photograph: Jakub Porzycki/NurPhoto/REX/Shutterstock]

There are once again calls for the government to Do Something about misinformation online, following the recent stabbings at Bondi Junction and the immediate spread of speculation and false claims on social media.

The previously ditched misinformation bill appears to be back on the cards. Even Peter Dutton has indicated support, despite the Coalition’s loud criticism of the draft legislation last year. The original bill was itself, ironically, engulfed in misleading commentary (misinformation, if you will). It remains to be seen what a revamped bill will contain.

The trouble is that the approach to tackling mis- and disinformation in Australia is fixated on surface-level interventions. Legislative efforts that target the symptoms, such as content removal, fact-checking and automated content moderation, however well-intentioned, leave the broader problems unaddressed. If we don’t deal with the underlying imperatives that make online mis- and disinformation so prevalent, we’ll just keep playing whack-a-mole.

There has always been some level of questionable information, spin, propaganda and lies disseminated via information technologies, whether newspapers, television, radio, pamphlets and so on. But what makes mis- and disinformation so potent in the digital age is not just their speed and scale, but also their precision.

Because it’s not just about what you see; it’s about why you see it. Amplification, engagement and recommendation algorithms are the fuel on the fire of misinformation.

By now it is well known that social media platforms reward content that keeps people scrolling. More time spent on a platform means more data generated and more ads sold. Polarising, controversial and sensationalist material performs well, and gets boosted accordingly. This is made worse by the content recommendation systems used to curate experiences in the name of ‘personalisation’, which can take people down algorithmic rabbit holes where they are served more of the same, and increasingly extreme, content. Research suggests misinformation may be less effective at changing people’s beliefs than imagined; rather, it is more likely to be taken as accurate when it aligns with already-established political beliefs. The potential for recommender systems to intensify confirmation bias is worrying.

These algorithms are only possible because they are built upon vast quantities of data, generated and collected over decades under lax privacy protections. A United Nations report links the spread of disinformation with the rampant data collection and profiling techniques used by the online advertising industry. Misinformation as we currently experience it is in many ways a symptom of the data-extractive business model of digital platforms, which prioritises engagement above all else.

On top of that, revenue-sharing schemes create direct financial incentives for content creators (or anyone with an X Premium account) to produce and share engaging content tailored for virality. That means people made money from spreading Islamophobic and antisemitic speculation after the Bondi Junction attack.

The impact of personalised disinformation is likely to be made worse in the future by developments in generative AI, which promises to deliver increasingly granular forms of customisation that have until now been impossible to achieve at scale.

If this all sounds like a disastrous mess, that’s because it is. Collective concerns such as the public interest, human rights and community responsibility can’t compete with the profit motive, and in practice are not prioritised by digital platforms.

The through-line is that these systems rely upon massive amounts of free-flowing data in order to function. The commercial exploitation of personal data is a key driver of platform business models that encourage and benefit from the production and spread of divisive, controversial or false content. It’s important to assess the causes of the problem at hand in order to identify pathways for meaningful intervention. We simply cannot fact-check our way out of this.

But here is the good news: we do have the capacity to target this problem at its source. Short of dismantling capitalism and doing away with the profit motive entirely, one of the best tools we have at hand to put a stopper in the flow of data that fuels so many of the harmful consequences of digital platforms, including misinformation, is to create and enforce strong privacy protections. Privacy reform is on the agenda, and bold change has the potential to curtail data-extractive business models, improving our online media spaces as a result.

Verification tools and content moderation can play a role. But such an approach has serious shortcomings, including the risk of overreach and encroachment on rights such as freedom of expression. The underlying business model is rotten, and that ought to be the focus of our attention.

• Samantha Floreani is a digital rights activist and writer based in Melbourne/Naarm. Lizzie O’Shea is a lawyer and a founder and chair of Digital Rights Watch
