Hello and welcome to Eye on AI!
No, Taylor Swift did not endorse Donald Trump. Yes, the large crowds at a Kamala Harris rally were real. Do you agree?
Whether you do or not, the fact that it is even a question shows that we are all in the throes of an ongoing AI election nightmare, one in which examples of AI-generated disinformation related to the 2024 election are quickly piling up.
Just last week, Donald Trump falsely claimed that photos of large crowds at a Kamala Harris rally were generated by AI. And two days ago, Trump shared several images on Truth Social implying that Taylor Swift had endorsed him—some of them clearly AI-generated. There was also the news that an Iranian group had used OpenAI's ChatGPT to generate divisive US election-related content, and that Elon Musk’s Grok AI model on X had spewed false information about voting.
The chaos caused by generative AI during this election season, which builds on the falsehoods that famously spread during the 2016 election and in the aftermath of the 2020 election, has long been predicted. Back in December, Nathan Lambert, a machine learning researcher at the Allen Institute for AI, told me that he thought AI would make the 2024 elections a “hot mess.”
It certainly feels that way to me: As Kamala Harris prepares to accept the Democratic nomination, I’m amazed to see chatter questioning whether crowds at the Democratic National Convention, as well as at Trump rallies, are real or AI-generated. As the Washington Post reported yesterday, many AI fakes are not necessarily meant to fool anyone—instead, they can be powerful, provocative memes meant to provoke, humiliate, or grab a cheap laugh that delights a candidate’s base.
Either way, it feels like an insidious march toward mass self-doubt about what is real and what isn’t. I’ve noticed that even I have begun to question what I’m seeing—either assuming that everything is AI-generated or desperately scanning photos for clues.
It can, however, get worse. How about real-time live deepfake video? A tool called Deep-Live-Cam has made the viral rounds on X over the past two weeks: With a single image of Elon Musk, for example, the developer was able to swap his own face for Musk’s and present high-quality live video as the billionaire founder of Tesla and SpaceX. Combined with any of the easy-to-use AI voice clones available today, this type of technology could offer next-level opportunities for deepfakes.
“I’ve seen a lot of deepfake tech but this one is freaking me out a little,” said Ariel Herbert-Voss, founder of RunSybil and previously OpenAI's first security research scientist, adding that Deep-Live-Cam is even “light-invariant”—meaning that as light moves around the subject, the AI-generated face stays “in character.” That makes it “harder to detect in the moment,” he told Fortune.
Don’t expect much help from the platforms where these images and videos are shared, either. At a panel in Chicago yesterday, hosted by the University of Southern California’s Annenberg School for Communication and Journalism, experts warned that social media companies have “sharply downsized” their election integrity departments. That, the panelists cautioned, will lead to a surge of AI-generated media and deepfakes in the lead-up to and aftermath of the 2024 election.
“This is only August—what’s going to happen in December?” said Adam Powell III, executive director of the USC Election Cybersecurity Initiative.
With any federal AI regulation at a standstill until after the election, it looks like there is little to do but wait—and hope we wake up from this AI election nightmare.
Sharon Goldman
sharon.goldman@fortune.com
@sharongoldman