The Guardian - AU
Jonathan Yerushalmy

AI deepfakes come of age as billions prepare to vote in a bumper year of elections

With more people heading to the polls in 2024 than ever before, some AI companies have already banned deepfakes of Joe Biden and Donald Trump. Composite: AFP / Getty / AP

“What a bunch of malarkey.”

Gail Huntley recognised the gravelly voice of Joe Biden as soon as she picked up the phone. Huntley, a 73-year-old resident of New Hampshire, was planning to vote for the president in the state’s upcoming primary, so she was confused when a pre-recorded message from him urged her not to.

“It’s important that you save your vote for the November election,” the message said. “Voting this Tuesday only enables the Republicans in their quest to elect Donald Trump again.”

Huntley quickly realised that the call was fake, but assumed Biden’s words had been taken out of context. She was shocked when it became clear that the recording was AI-generated. Within weeks the US had outlawed robocalls that use voices generated by AI.

The Biden deepfake was the first major test for governments, tech companies and civil society groups, who are all locked in heated debate over how best to police an information ecosystem in which anyone can create photo-realistic images of candidates, or replicate their voices with frightening accuracy.

Citizens of dozens of countries – including the US, India and most likely the UK – will go to the polls in 2024, and experts say the democratic process is at serious risk of being disrupted by artificial intelligence.

AI fakes have already been used in elections in Slovakia, Taiwan and Indonesia, and they’re being launched into an environment in which trust in politicians, institutions and the media is already low.

Watchdogs are warning that with more than 40,000 layoffs at the tech companies that host and moderate much of this content, digital media is uniquely vulnerable to exploitation.

Mission impossible?

For Biden, concerns about the potentially dangerous uses of AI took on new urgency after he watched the latest Mission: Impossible film. Over a weekend at Camp David, the president relaxed in front of the movie, which sees Tom Cruise’s Ethan Hunt face down a rogue AI.

The deputy White House chief of staff, Bruce Reed, said that if Biden hadn’t already been concerned about what could go wrong with AI, “he saw plenty more to worry about” after watching the movie.

Mission: Impossible – Dead Reckoning Part One sees Ethan Hunt go up against a rogue AI. Photograph: FlixPix/Alamy

Since then, Biden has signed an executive order that requires leading AI developers to share safety test results and other information with the government.

And the US is not alone in taking action: the EU is close to passing one of the most comprehensive laws to regulate AI, though it won’t come into effect until 2026. Proposed regulation in the UK has been criticised for moving too slowly.

However, the US is where many of the most transformative tech companies are based, so the actions of the White House will have a profound effect on how the most disruptive AI products are developed.

Katie Harbath – who helped develop policy at Facebook for 10 years and now works on trust and safety issues with tech companies – says the US government’s actions don’t go far enough. Concerns about stifling innovation – especially as China moves ahead with developing its own AI industry – could play into this, she says.

Harbath has had a ringside seat to how the information system has evolved – from the “golden age” of social media’s growth, through the great reckoning that came after the Brexit and Trump votes and the subsequent efforts to stay ahead of disinformation.

Her mantra for 2024 is “panic responsibly”.

She says that in the short term, regulating and policing AI-generated content will fall to the very companies that are developing the tools to create it.

“We just don’t know if the companies are prepared,” says Harbath. “There are also newer platforms for whom this election season is their first real test.”

Last week, major tech companies took a big step towards coordinating their efforts with the signing of an agreement to voluntarily adopt “reasonable precautions” to prevent AI from being used to disrupt democratic elections around the world.

Among the signatories are ChatGPT creator OpenAI, as well as Google, Adobe and Microsoft – all of whom have launched tools to generate AI-created content. Many companies have also updated their own rules and are banning the use of their products in political campaigns. Enforcing these bans is another matter.

The Munich security conference, where tech companies signed an agreement to prevent AI from being used to disrupt elections around the world. Photograph: dts News Agency Germany/REX/Shutterstock

OpenAI, whose powerful Dall-E software has been used to create photo-realistic images, has said its tool will decline requests to generate images of real people, including candidates.
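For a developer, such a refusal surfaces as an API error rather than an image. The snippet below is a minimal, hypothetical sketch of how a probe might look using OpenAI’s Python SDK – the prompt and model choice are illustrative, it assumes an OPENAI_API_KEY is set in the environment, and it is not how the Guardian or OpenAI test these policies.

```python
# Hypothetical sketch: observing a content-policy refusal from an image model.
# Assumes the openai Python package (v1+) and an OPENAI_API_KEY env variable.
from openai import OpenAI, BadRequestError

client = OpenAI()

# Illustrative prompt naming a real candidate - the kind of request
# OpenAI says its tools will decline.
prompt = "A photorealistic image of Joe Biden giving a campaign speech"

try:
    result = client.images.generate(model="dall-e-3", prompt=prompt, n=1)
    print("Image generated:", result.data[0].url)
except BadRequestError as err:
    # Policy refusals come back as 400-level errors, so the caller
    # receives a rejection message instead of an image URL.
    print("Request declined:", err)
```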

Midjourney – whose AI image generation is widely regarded as the most powerful and accurate – says users may not use the product for political campaigns “or to try to influence the outcome of an election”.

Midjourney’s CEO, David Holz, has said the company is close to banning political images, including those of the leading presidential candidates. Some changes appear to have already come into effect. When the Guardian asked Midjourney to generate an image of Joe Biden and Donald Trump in a boxing ring, the request was denied and flagged as falling foul of the company’s community standards.

However, when the same prompt was submitted with Biden and Trump replaced by UK prime minister Rishi Sunak and opposition leader Keir Starmer, the software generated a series of images without any problems.

The example goes to the heart of many policymakers’ concerns over how effectively tech companies are regulating AI-created content outside the hothouse of the US presidential election.

‘A multimillion-euro weapon of mass manipulation’

Despite OpenAI’s ban on using its tools in political campaigning, Reuters reported that its products were used widely in this month’s election in Indonesia – to create campaign art, track social media sentiment, build interactive chatbots, and target voters.

Harbath says it’s an open question as to how proactively newer companies such as OpenAI can enforce their policies outside the US.

“Every country is a little different, with different laws and cultural norms … When you have US-focused companies it can be hard to realise that the way things work in the US is not how things work elsewhere.”

Cartoon versions of Indonesian presidential candidate Prabowo Subianto were produced using generative AI. Photograph: Willy Kurniawan/Reuters

Slovakia’s national elections last year pitted a pro-Russian candidate against another who advocated for maintaining stronger ties with the EU. With support for Ukraine’s war efforts on the ballot, the vote was highlighted by EU officials as potentially at risk of interference by Russia and its “multimillion-euro weapon of mass manipulation”.

As the election approached and a national media blackout began, an audio recording of the pro-EU candidate, Michal Šimečka, was posted to Facebook.

In the recording, Šimečka appeared to discuss plans to rig the election by buying votes from marginalised communities. The audio was fake – and the news agency AFP said it showed signs of having been manipulated using AI.

However, with media outlets and politicians mandated to stay silent under the election blackout laws, debunking the recording was almost impossible.

The manipulated audio appears to have fallen through a loophole in the way Facebook’s owner, Meta, polices AI-generated material on its platforms. Under its community standards, posting content that has been manipulated in ways that are “not apparent to an average person”, and in which a person has been edited to say “words that they did not say”, is banned. But this rule applies only to video.
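To see why the audio slipped through, it helps to restate the policy as described above as a simple check. The sketch below is a toy illustration, not Meta’s actual enforcement code: because the rule is keyed to video, an equivalent fake audio clip never triggers it.

```python
# Toy illustration of the loophole described above - not Meta's real system.
from dataclasses import dataclass

@dataclass
class Post:
    media_type: str          # "video", "audio" or "image"
    ai_manipulated: bool     # edited in ways "not apparent to an average person"
    fabricates_speech: bool  # subject made to say words they did not say

def violates_manipulated_media_rule(post: Post) -> bool:
    # As described in this article, only manipulated *video* that puts
    # words in someone's mouth is banned under the community standards.
    return (
        post.media_type == "video"
        and post.ai_manipulated
        and post.fabricates_speech
    )

# A fake audio clip like the Simecka recording passes the check untouched.
fake_audio = Post(media_type="audio", ai_manipulated=True, fabricates_speech=True)
print(violates_manipulated_media_rule(fake_audio))  # False - the loophole
```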

The pro-Russian candidate Robert Fico went on to win the election and become prime minister.

When will we know that the future is here?

Despite the dangers, there are some signs that voters are more prepared for what’s coming than authorities may think.

“Voters are way more savvy than we give them credit for,” says Harbath. “They might be overwhelmed but they understand what’s going on in the information environment.”

For many experts, the major area of concern isn’t the technology we’re already grappling with, but the innovations still over the horizon.

Academics, writing in MIT Technology Review, said that when it comes to how AI may threaten our democracy, the public conversation “lacks imagination”. The real danger, they say, isn’t what we’re already scared of, but what we can’t imagine yet.

“What are the rocks we’re not looking under?” Harbath asks. “New tech happens, new bad actors appear. There is a constant ebb and flow that we need to get used to living in.”
