The Guardian - US
World
Nick Robins-Early

Disinformation reimagined: how AI could erode democracy in the 2024 US elections

[Image: collage of close-ups of Biden, Trump, Fauci and a military jet]

A banal dystopia where manipulative content is so cheap to make and so easy to produce on a massive scale that it becomes ubiquitous: that’s the political future digital experts are worried about in the age of generative artificial intelligence (AI).

In the run-up to the 2016 presidential election, social media platforms were vectors for misinformation as far-right activists, foreign influence campaigns and fake news sites worked to spread false information and sharpen divisions. Four years later, the 2020 election was overrun with conspiracy theories and baseless claims about voter fraud that were amplified to millions, fueling an anti-democratic movement to overturn the election.

Now, as the 2024 presidential election comes into view, experts warn that advances in AI have the potential to take the disinformation tactics of the past and breathe new life into them.

AI-generated disinformation not only threatens to deceive audiences but also to erode an already embattled information ecosystem by flooding it with inaccuracies and deceptions, experts say.

“Degrees of trust will go down, the job of journalists and others who are trying to disseminate actual information will become harder,” said Ben Winters, a senior counsel at the Electronic Privacy Information Center, a privacy research non-profit. “It will have no positive effects on the information ecosystem.”

New tools for old tactics

Artificial intelligence tools that can create photorealistic images, mimic voice audio and write convincingly human text have surged in use this year, as companies such as OpenAI have released their products on the mass market. The technology, which has already threatened to upend numerous industries and exacerbate existing inequalities, is increasingly being employed to create political content.

In recent months, an AI-generated image of an explosion at the Pentagon caused a brief dip in the stock market. AI audio parodies of US presidents playing video games became a viral trend. AI-generated images that appeared to show Donald Trump fighting off police officers trying to arrest him circulated widely on social media platforms. The Republican National Committee released an entirely AI-generated ad depicting imagined disasters that would follow if Biden were re-elected, while the American Association of Political Consultants warned that video deepfakes present a “threat to democracy”.

In some ways, these images and ads are not so different from the manipulated images and videos, misleading messages and robocalls that have been a feature of society for years. But disinformation campaigns once faced a range of logistical hurdles: creating individualized messages for social media was incredibly time-consuming, as was Photoshopping images and editing videos.

Now, though, generative AI has made creating such content accessible to anyone with even basic digital skills, with few guardrails or effective regulations to curtail it. The potential effect, experts warn, is a democratization and acceleration of propaganda just as several countries enter major election years.

AI lowers the bar for disinformation

The potential harms of AI to elections read like a greatest-hits list of concerns from past decades of election interference. Social media bots that pretend to be real voters, manipulated videos and images, and even deceptive robocalls are all easier to produce and harder to detect with the help of AI tools.

There are also new opportunities for foreign countries to attempt to influence US elections or undermine their integrity, as federal officials have long warned Russia and China are working to do. Language barriers to creating deceptive content are eroding, and the telltale signs of scammers and disinformation campaigns, such as repetitive phrasing or strange word choices, are being replaced with more believable text.

“If you’re sitting in a troll farm in a foreign country, you no longer need to be fluent to produce a fluent-sounding article in the language of your target audience,” said Josh Goldstein, a research fellow at Georgetown University’s Center for Security and Emerging Technology. “You can just have a language model spit out an article with the grammar and vocabulary of a fluent speaker.”

AI technology may also intensify voter suppression campaigns targeting marginalized communities. Two far-right activists admitted last year to making more than 67,000 robocalls spreading election misinformation to Black voters in the midwest, and experts such as Winters note that AI could be used to replicate such a campaign on a greater scale with more personalized messaging. Audio that mimics elected leaders or trusted personalities could feed select groups of voters misleading information about polls and voting, or simply sow confusion.

Generating letter-writing campaigns or fake engagement could also create a sort of false constituency, making it unclear how voters are actually responding to issues. In a research experiment published earlier this year, the Cornell University professors Sarah Kreps and Doug Kriner sent tens of thousands of emails to more than 7,000 state legislators across the country. The emails purported to be from concerned voters but were split between AI-generated letters and ones written by a human. The response rates were virtually the same, with human-written emails receiving replies only 2% more often than the AI-generated ones.

Campaigns test the waters

Campaigns have already begun dabbling in AI-generated content for political purposes. After Florida’s governor, Ron DeSantis, announced his candidacy during a Twitter live stream in May, Donald Trump mocked his opponent with a parody video of the announcement that featured the AI-generated voices of DeSantis, Elon Musk and Adolf Hitler. Last month, the DeSantis campaign shared AI-generated images of Trump embracing and kissing Anthony Fauci.

During the 2016 and 2020 elections, Trump’s campaign leaned heavily on memes and videos made by his supporters – including deceptively edited videos that made it seem like Biden was slurring his words or saying that he shouldn’t be president. The AI version of that strategy is creeping in, election observers warn, with Trump sharing a deepfake video in May of the CNN host Anderson Cooper telling viewers that they had just watched “Trump ripping us a new asshole here on CNN’s live presidential town hall”.

With about 16 months to go until the presidential election and widespread generative AI use still in its early days, it’s an open question what role artificial intelligence will play in the vote. The creation of misleading AI-generated content alone doesn’t mean it will affect an election, researchers say, and measuring the impact of disinformation campaigns is a notoriously difficult task. It’s one thing to monitor engagement with fake material, but another to gauge the secondary effects of polluting the information ecosystem to the point where people generally distrust any information they consume online.

But there are concerning signs. Just as the use of generative AI is increasing, many of the social media platforms that bad actors rely on to spread disinformation have begun rolling back some of their content moderation measures – YouTube reversed its election integrity policy, Instagram allowed the anti-vaccine conspiracy theorist Robert F Kennedy Jr back on its platform and Twitter’s head of content moderation left the company in June amid a general fall in standards under Elon Musk.

It remains to be seen how effective media literacy and traditional means of factchecking can be in pushing back against a deluge of misleading text and images, researchers say, as the potential scale of generated content represents a new challenge.

“AI-generated images and videos can be created much more quickly than factcheckers can review and debunk them,” Goldstein said, adding that hype over AI can also corrode trust by making the public believe anything could be artificially generated.

Some generative AI services, including ChatGPT, do have policies and safeguards against generating misinformation, and in some cases they can block attempts to use the service for that purpose. But it’s still unclear how effective those measures are, and several open-source models lack such policies and features.

“There’s not really going to be sufficient control of dissemination,” Winters said. “There’s no shortage of robocallers, robo emailers or texters, and mass email platforms. There’s nothing limiting the use of those.”
