
Since 2024, the RECapture research team has been monitoring political disinformation and advertising in Australia.
Our focus is on WeChat, the primary news and information platform for Chinese speakers in Australia, and RedNote (Xiaohongshu), an emerging Chinese information sharing platform similar to Instagram.
Hundreds of thousands of people in Australia use these platforms. They’re often a main source of news.
Our research reveals that while Australian news media often focus on foreign interference, disinformation in this election cycle is being driven by commercial and domestic political interests.
These pose substantial threats to Chinese Australian communities and our democracy.
What is disinformation?
Defining disinformation often hinges on three criteria:
the truthfulness of the content
the intent behind its creation and dissemination
the harm it causes.
However, findings from our 2023 study on the Voice referendum challenge those assumptions. Disinformation isn’t as simple as true or false. It can involve ambiguous intent and produce harm that’s difficult to measure.
Further, Australia’s lack of a clear definition of online misinformation and disinformation presents significant challenges for researchers and regulators.
With these limitations, we focus on deliberate misrepresentations of policy positions and the manipulation of political speech intended to influence voter behaviour.
What have we discovered?
We found examples that misrepresented political statements and policies and capitalised on pre-existing concerns within migrant communities.
Concerns include potential changes to investor visas, undocumented migration, humanitarian programs and Australia’s diplomatic relations with India, the US and China.
We also found several strategies, such as:
exaggerating the likelihood of events (like the revival of the Significant Investment Visa – an invitation-only visa for those investing at least A$5 million in certain sectors)
manipulating timelines and contexts (like re-hyping past news stories to create the impression the events are happening in the present)
and misaligning visuals and text to suggest misleading interpretations.
While we’re working to better understand who’s behind these cases, we know they’re not political parties. Here are two examples.
This post on RedNote, published in April, referred to several statements, including Prime Minister Anthony Albanese’s speech at the Future of Western Sydney Summit. Albanese stated the government had a “balanced” immigration ratio.
However, the Chinese-language text accompanying the post omitted Labor’s past immigration policies and misrepresented the speech:
Labor grants amnesty to all? Albo embraces immigrants! Good news for Chinese people!
Discussions in the comments largely favoured a class-based immigration system. Users argued the Labor government disproportionately favoured humanitarian immigrants and that greater preference should be given to upper and middle-class migrants.
We also found examples on WeChat.
On March 4, the Chinese-language media outlet AFN Daily published an article with the provocative headline:
I am furious! How shameless! Australia is really going to be in chaos!
The headline was sensational and intentionally ambiguous. It enticed readers to click through and scroll past four advertisements, including one political ad by the Liberal candidate for Bennelong, Scott Yung.
The article claimed the Coalition’s support had surpassed Labor’s, while presenting a segment of a poll in which Labor had actually received greater voter support for its welfare, healthcare and education policies.
The article further claimed the Labor Party had naturalised 12,500 new citizens – predominantly of Indian origin – in an attempt to sway the Chinese audience.
This claim had been explicitly refuted by Tony Burke back in February.
The article challenged this assertion by Burke and reinforced anti-Labor sentiment through racially charged narratives. It emphasised the strengthening diplomatic relations between Australia and India, and highlighted the growing number of South Asian and Middle Eastern migrants in comparison to Chinese migrants.
We also observed ad hoc disinformation narratives triggered by natural disasters or public emergencies.
For example, there was a claim on WeChat suggesting “the election is cancelled because of Cyclone Alfred.” Such disinformation requires timely intervention to prevent its rapid spread and impact.
Why is this so harmful?
The harms of disinformation are often more severe on digital media used by marginalised communities. Our research shows a few reasons why.
The limited regulatory oversight of these platforms makes the harms hard to fully identify and prevent.
Australian regulatory bodies intervene only minimally to address disinformation on these platforms. This reflects broader national concerns around cybersecurity and foreign interference.
Unfortunately, this has resulted in a largely unregulated environment where political disinformation thrives during election cycles.
Finally, we see persistent disinformation narratives – from 2019, 2022, 2023 (around the Voice referendum), through to 2025 – where racial stereotypes intersect with partisan biases.
What can be done?
For Chinese-language platforms, our findings suggest disinformation is less a product of foreign political actors, propaganda or linguistic barriers than of the insular structure of WeChat’s and RedNote’s media ecosystems.
Tailored civic education and media literacy initiatives can help users to spot disinformation. Currently, grassroots debunking efforts are largely done by community members who comment beneath posts.
But more broadly, we need to support the public to think critically when reading digital news. This would help mitigate the exploitation of racial and gender biases for clicks and political point-scoring.
While automation is sometimes used to detect and debunk disinformation, its application is limited here. WeChat and RedNote prohibit external automated tools. Their own systems for flagging content generated by artificial intelligence don’t always work either.
Individual and coordinated human effort remains the best way to accurately inform Australian communities of their choices this election. This applies whether these communities tune in to mainstream broadcasts, major US-based social media platforms or Chinese language apps.
The authors would like to thank researchers Dan Dai, Stevie Zhang, and Mengjie Cai for their contributions to this project.

The research project is funded by the Susan McKinnon Foundation for the period 2024-2025.
Robbie Fordyce is a member of the grants panel for the Australian Communications Consumer Action Network (ACCAN). He has previously worked on studies of online political content that have been funded by the Australian Research Council and by ACCAN.
Luke Heemsbergen does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
This article was originally published on The Conversation. Read the original article.