Fortune
Jeremy Kahn

Advanced A.I. like ChatGPT, DALL-E, and voice-cloning tech is already raising big fears for the 2024 election

A voting booth at a polling location for the 2020 Presidential election in Louisville, Kentucky. (Credit: Scotty Perry—Bloomberg/Getty Images)

On Feb. 27, the eve of Chicago’s mayoral election, a Twitter account calling itself Chicago Lakefront News posted an image of candidate Paul Vallas, a former city budget director and school district chief who was in a tight four-way contest for the city’s top job, along with an audio recording. On the soundtrack, Vallas seems to downplay police shootings, saying that “in my day” a cop could kill as many as 18 civilians in his career and “no one would bat an eye.” The audio continues, “This ‘Defund the Police’ rhetoric is going to cause unrest and lawlessness in the city of Chicago. We need to stop defunding the police and start refunding them.”

As it turned out, Vallas said none of those things. The audio was quickly debunked as a fake, likely created with easily accessible artificial intelligence software that clones voices. The Chicago Lakefront News account, which had been set up just days before the audio was posted, quickly deleted the tweet—but not before it had been seen by thousands and widely recirculated, with some viewers apparently tricked into believing the "recording" was authentic.

Although Vallas ultimately lost to challenger Brandon Johnson in a runoff election on April 4, the audio is not thought to have significantly influenced the outcome of the mayoral race. Still, the Vallas voice clone is a scary preview of the sort of misinformation experts say we should expect to face in the 2024 U.S. presidential election, thanks to rapid advances in A.I. capabilities.

These new A.I. systems are collectively referred to as “generative A.I.” ChatGPT, the popular text-based tool that spits out student term papers and business emails with a few prompts, is just one example of the technology. A company called ElevenLabs has released software that can clone voices from a sample just a few seconds long, and anyone can now order up photorealistic still images using software such as OpenAI’s DALL-E 2, Stable Diffusion, or Midjourney. While the ability to create video from a text prompt is more nascent—New York–based startup Runway has created software that produces clips a few seconds in length—a scammer skilled in deepfake techniques can create fake videos good enough to fool many people.

“We should be scared shitless,” says Gary Marcus, professor emeritus of cognitive science at New York University and an A.I. expert who has been trying to raise the alarm about the dangers posed to democracy by the large language models underpinning the tech. While people can already write and distribute misinformation (as we’ve seen with social media in past elections), it is the ability to do so at unprecedented volume and speed—and the fact that non-native speakers can now craft fluent prose in most languages with a few keystrokes—that makes the new technology such a threat. “It is hard to see how A.I.-generated misinformation will not become a major force in the next election,” he says.

The new A.I. tools, Marcus says, are particularly useful for a nation-state, such as Russia, where the goal of propaganda is less about persuasion than simply overwhelming a target audience with an avalanche of lies and half-truths. A Rand Corporation study dubbed this tactic “the firehose of falsehood.” The objective, it concluded, was to sow confusion and destroy trust, making people more likely to believe information shared by social connections than by experts.

Not everyone is sure the situation is as dire as Marcus suggests—at least not yet. Chris Meserole, a fellow at the Brookings Institution who specializes in the impact of A.I. and emerging technologies, says recent presidential elections have already witnessed such high levels of human-written misinformation that he isn’t sure that the new A.I. language models will make a noticeable difference. “I don’t think this will completely change the game and 2024 will look significantly different than 2020 or 2016,” he says.

Meserole also doesn’t think video deepfake technology is good enough yet to play a big role in 2024 (though he says that could change in 2028). What does worry Meserole today is voice clones. He could easily imagine an audio clip surfacing at a key moment in an election, purporting to be a recording of a candidate saying something scandalous in a private meeting. Those present in the meeting might deny the clip’s veracity, but it would be difficult for anyone to know for sure.

Studies have come to conflicting conclusions on whether false narratives persuade anyone or only reinforce existing beliefs, says Sandra Wachter, a professor of technology and regulation at the Oxford Internet Institute. But in a close election, even such marginal effects could be decisive.

Faced with the threat of machine-generated fake news, some believe A.I. may itself offer the best defense. In Spain, a company called Newtral that specializes in fact-checking claims made by politicians is experimenting with large language models similar to those that power ChatGPT. While these models can’t actually verify facts, they can make humans better at debunking lies, says Newtral chief technology officer Ruben Miguez Perez. The technology can flag when a piece of content is making a factual claim worth checking, and it can detect other content promoting the same narrative, a process called “claim matching.” By pairing large language models with other machine learning software, Miguez Perez says it’s also possible to assess the likelihood of something being misinformation based on the sentiments expressed in the content. Using these methods, Newtral has cut the time it takes to identify statements worth fact-checking by 70% to 80%, he says.

The large social media platforms, such as Meta and Google’s YouTube, have been working on A.I. systems that do similar things. In the run-up to the 2020 U.S. presidential election, Facebook parent company Meta says it displayed warnings on more than 180 million pieces of content that were debunked by third-party fact-checkers. Still, plenty of misinformation slips through. Memes, which rely on both images and text to convey a point, are particularly tricky for A.I. models to catch. And while Meta says its systems have only gotten better since the 2020 election, the people promoting false narratives are continually devising new variations that A.I. models haven’t seen before.

What might make some difference, Marcus says, is sensible regulation: Those creating large language models should be required to create “digital watermarks” that make it easier for other algorithms to identify A.I.-created content. OpenAI, ChatGPT’s creator, has talked about this kind of watermarking, but has yet to implement it. Meanwhile, it has released free A.I.-content detection software, but it works only in about a third of cases. Marcus also says Congress should make it illegal to manufacture and distribute misinformation at scale. While First Amendment advocates might object, he says the framers of the Constitution never imagined technology that could produce infinite reams of convincing lies at the press of a button.

Then again, the late 18th century, when the U.S. was founded, was a golden era of misinformation as well, with anonymous pamphlets and partisan newspapers peddling scurrilous tales about opposing politicians and parties. Democracy survived then, notes Oxford’s Wachter. So perhaps it will this time too. But it could be a campaign unlike any we’ve ever witnessed before. 

A version of this article appears in the April/May 2023 issue of Fortune with the headline, “Fake news 2.0: The election threat from A.I.”
