An explosion of new artificial intelligence (AI) tools which can "clone" people's voices and impersonate them in videos is sweeping the internet, prompting calls for greater information transparency and literacy.
While these generative AI aids can help creatives working across a range of professions, from marketing and sales to education, translation and news publishing, they can also be used to scam and manipulate people.
This reporter tested some of the more popular AI video platforms.
How do generative AI videos work?
Generative AI is distinct from other forms of AI in that it creates - or generates - brand new text, images, video or audio from existing content it's been trained on.
From just two minutes of a real recording of a person talking, a generative AI tool can create a video "clone" of the individual.
This "clone" can then be instructed to say anything by typing in a script, producing a fairly realistic impersonation of the person "talking", complete with their accent, pitch and tone.
Some tools allow users to automatically translate the clone's voice into multiple languages, again largely retaining the original person's tone.
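For readers curious how simple the cloning step has become, the sketch below shows the general workflow, assuming the open-source Coqui TTS library and its XTTS v2 model; the file names and script text are hypothetical placeholders, and this is a minimal illustration rather than any specific tool tested for this story.

```python
# A minimal sketch of the voice-cloning workflow, assuming the open-source
# Coqui TTS library and its XTTS v2 model. File names and the script text
# are hypothetical placeholders.
from TTS.api import TTS

# Load a multilingual model that supports zero-shot voice cloning.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# "reference.wav" stands in for the short real recording of the person;
# a couple of minutes of clean speech is typically enough.
tts.tts_to_file(
    text="Any script typed here is spoken in the cloned voice.",
    speaker_wav="reference.wav",
    language="en",
    file_path="cloned_output.wav",
)

# Passing translated text with a matching language code (e.g. "fr") makes
# the same cloned voice "speak" another language, as described above.
```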
A start-up global streaming news channel called Channel 1 plans to launch in 2024 using AI hosts that look like real people, promising news reported by real journalists but delivered by AI avatars.
Other AI video platforms allow users to draw on vast libraries of stock images and footage to create new content.
The rise of AI scams and manipulation
But misinformation experts are worried bad actors could co-opt the tools, readily available and sometimes free online, to spread lies or scam people.
In fact, they are already doing it.
"The popularity of AI video makers and other generative AI models have led to concerns about their impact on the information environment, especially how they might contribute to a larger volume of mis- or dis-information created in better quality and perhaps personalised to target a specific group(s) of people," RMIT CrossCheck bureau editor Esther Chan said.
She said an AI-generated voice was added to genuine footage of Prime Minister Anthony Albanese and then posted to social media to make it sound like he was promoting a financial scam.
A real video of Treasurer Jim Chalmers was also edited using AI to make it appear he was promoting an investment platform.
And another AI-doctored video showed up on social media promoting the "no" vote in the Voice to Parliament referendum.
AI-generated videos of the Israel-Hamas war have also been shared widely online, while former US president Donald Trump is a popular target for manipulated videos and images.
We need to work together
RMIT CrossCheck director Anne Kruger said the AI-driven misinformation challenge required a collective approach.
"The technology promises new opportunities in terms of creativity and saving time to get on to other tasks," she said.
"We must be very real about what is at stake here: the technology has the ability to spread harmful information that can damage our democracy on a spectrum - at one end sowing doubts to distrust and misunderstanding - causing behavioural changes or consumers to act on unsound information."
Dr Kruger said people should be sceptical and ask themselves "who is posting this?" and "where did they get this information?"
How will AI change our lives in 2024?
2023 was a catalyst period for AI - the start of the waterfall - but 2024 will be the year when AI meets the real world, CSIRO's National Artificial Intelligence Centre director Stela Solar said.
"[In 2023] we saw with generative AI tools the scale and pace of adoption was unlike anything we've seen before," she said.
OpenAI's ChatGPT reached an estimated 100 million users within two months of its public launch and, in a "revolution", people were suddenly able to interact easily with AI tools.
Ms Solar said around one third of people had tried generative AI at work, but most of them - around 68 per cent - were not telling anyone they were doing it.
"Many of us are trying it out," she said. "We're wanting to learn what it is. We want to see how it might shape our life for our work, but we are not yet engaged with the mature systems in making sure that it's robust."
Standards and regulations were trying to keep up.
"2024 will be the moment for us to step up in terms of those benchmarks best practices, so that we ensure that the outcomes are safe and responsible," Ms Solar said.
Why generative AI is like soft serve ice cream
"I often think of generative AI as a soft serve machine ... you push the button and ice cream will come out but you actually don't know if it is good for you," she said.
There was much work to be done to make sure AI was accurate, unbiased, fair to creators and intellectual property holders, and that the parameters surrounding it were robust and responsible.
"The concept of grounding has become paramount to well-operating generative AI tools," Ms Solar said.
And AI doesn't work in a vacuum.
Existing legal systems still apply and the people who use AI are still responsible for what it produces.
AI in the real world
"2024 will be about making AI real in our context of work and life," she said.
"Trust will be incredibly critical as organisations innovate."
Marketing, customer service and agricultural processes could be improved, along with scientific research tools grounded in accurate, reliable information. There were also opportunities for environmental projects, such as optimising satellite technology and wind farm layouts for sustainability.
AI health applications could also improve access to medical advice for remote communities.
"It could actually create a lot of value and benefits for the world, but the key element is that it's scaling everything and it's accelerating everything," Ms Solar said.