Voters are being warned to be vigilant about the growth of artificial intelligence-enabled political content on social media before the next federal election.
The Australian Electoral Commission has warned that AI-generated disinformation, such as deepfake videos or robocalls impersonating politicians, could be legal under current regulations.
AAP FactCheck has already debunked deepfakes of Prime Minister Anthony Albanese and Treasurer Jim Chalmers created by financial scammers.
A deepfake of then-Queensland premier Steven Miles dancing was posted on TikTok in July, and in September, Senator David Pocock sounded a warning about AI content by creating a deepfake of Mr Albanese.
Scammers also used AI to impersonate Sunshine Coast Mayor Rosanna Natoli in a fake Skype call in May.
Dr Niusha Shafiabady, a computational intelligence expert at Charles Darwin University, says regulations or standards could reduce the risk of popular AI tools, such as chatbots, being used to generate political disinformation.
But she warns that even with rules, individuals wanting to spread disinformation online would likely develop their own AI tools.
Dr Shafiabady says AI content could "change your view without you even knowing it", and it's up to individuals to be wary about online content.
"We should be vigilant. That is the smartest move," she told AAP.
"Unfortunately, many of us are using social media platforms for entertainment, for passing time, which is not really proper and making people more vulnerable."
A study by the UK's Alan Turing Institute analysing AI-enabled content during the US presidential election found no evidence it had affected the result.
However, it noted that was mainly because there was insufficient data about how it affected real-world voting behaviour.
"Despite this, deceptive AI-generated content did shape US election discourse by amplifying other forms of disinformation and inflaming political debates," the study concluded.
"From fabricated celebrity endorsements to allegations against immigrants, viral AI-enabled content was even referenced by some political candidates and received widespread media coverage."
Dr Marian-Andrei Rizoiu, a behavioural data scientist at the University of Technology Sydney, said more accessible AI content tools also allowed more people to generate higher-quality deceptive content.
He said users who engaged with AI-enabled political disinformation were more likely to be recommended such content again by the AI recommendation systems used by social media platforms.
"The way it's doing that is by profiling me, and predicting what type of content would interest me," Dr Rizoiu said.
However, he said Australians shouldn't be overly worried about the impact of AI disinformation on elections.
Dr Rizoiu said people did not automatically believe everything they read after the printing press was invented, nor did they automatically believe everything they later saw on television.
"We're just going to be at the point where as a society, if you see a video online, it may be true or maybe it's false," he said.
"But we will have, and there is already research and initiatives into tagging the provenance of content. We will have ways to check if something is true and correct."