Fortune
Jenn Brice

How election misinformation thrived online during the 2024 presidential race—and will likely continue in the days to come

The rapid rise of generative artificial intelligence and deepfake technologies had many dreading the year of the AI election. And as the 2024 campaign winds down, misinformation and disinformation about the election are everywhere online, from foreign bot accounts to billionaire-backed ad campaigns and the candidates themselves.

Since the 2020 presidential election, both the volume of disinformation campaigns and the methods behind them have grown, says Cliff Steinhauer, director of information security and engagement at the National Cybersecurity Alliance.

“The biggest red flag is that they're going to try to make you reactionary and get you out of your critical thinking process and into your reactionary, emotional thinking process,” he told Fortune.

A recent survey of voters by YouGov and Tech Policy Press found 65% of Americans believe election-related misinformation on social media has worsened since 2020. 

Fake bot accounts and deepfakes are on the rise, often proliferated by foreign adversaries, while platforms like X have loosened efforts to limit misleading content.

Ahead of Election Day, U.S. intelligence officials debunked social media videos, some of which originated in Russia, claiming to show election interference in Pennsylvania and Georgia. 

Meanwhile, former President Donald Trump and his running mate JD Vance repeated false claims about Haitian immigrants in Ohio that began online. And Elon Musk has boosted right-wing conspiracy theories about immigrant voting, often via his social media platform.

The methods of misinformation

Musk’s X was under increased scrutiny for amplifying right-wing misinformation in the days leading up to the election. The billionaire tech CEO claimed he was committed to uncensored freedom of speech when he bought the platform formerly known as Twitter for $44 billion in 2022. 

Reports show that false claims run rampant despite X’s community notes feature—a crowdsourced method of fact-checking meant to replace the watchdogs laid off by Musk. Lately, with Musk all in for Trump, that includes Musk’s inescapable promotion of right-wing rhetoric and political advertising, such as his false claim that Democrats “have imported massive numbers of illegals to swing states.”

X’s chatbot, Grok, also spread incorrect information about voting. The company changed its answers to redirect users to Vote.gov only after five secretaries of state flagged the misinformation. The election officials estimated that millions of people viewed incorrect responses to their questions about the election.

Some election officials have reportedly hand-delivered personal warnings about spreading election misinformation to Musk. A spokesperson for X told Fortune that the company has communicated with election regulators and other stakeholders to address threats, noting that its civic integrity policy will remain in place through the inauguration. Since Musk purchased Twitter, that policy no longer explicitly bans content that seeks to undermine the outcome of an election.

But X is far from the only platform harboring misinformation. Meta’s Facebook made $1 million from misleading advertisements suggesting the election might be delayed or rigged, which Meta has since removed. A deceptive ad series appearing to come from the Harris campaign, but backed by Musk and other conservatives, also thrived on Facebook.

The company blocked new ads about social issues, elections, or politics for the final week of the election—a policy it’s had since 2020. In anticipation of misinformation about vote counts and election integrity after the polls close, Meta told advertisers Monday that it will extend its ban until later this week. A spokesperson for Meta said the company employs around 40,000 people globally working on safety and security.

On the encrypted messaging app Telegram, more than 500,000 right-wing users across 50 channels vowed to question and dispute any outcome other than a Trump victory.

Foreign sources of misinformation

The FBI on Saturday warned voters about two deepfake videos that emerged ahead of Tuesday’s election. Both falsely claimed to come from the FBI, with one concerning ballot fraud and another about Kamala Harris’s husband, Doug Emhoff. 

The BBC reported that those deepfakes were part of a larger Russian operation.

Federal agencies say Russia, China, and Iran are the most prominent foreign nations spreading disinformation in the U.S. While Russia favors Trump, Iran promotes Harris. China has spread false information about both candidates. 

U.S. intelligence officials also identified a fake video of a Haitian man claiming to cast multiple votes in Georgia as a product of Russian influence. Another video that appeared to show a Pennsylvania poll worker destroying mail-in ballots was similarly attributed to Russian actors. 

Some of the other 300-plus videos impersonated news organizations and posted false claims about Harris and content about unrest and “civil war,” the BBC found. But it added that most of these videos did not get significant views from real people on X, and instead showed tens of thousands of views that came mainly from bot accounts. 

Despite the “unprecedented” surge in disinformation from foreign actors, Jen Easterly, the director of the U.S. Cybersecurity and Infrastructure Security Agency, said Monday that she hasn’t seen evidence of adversaries affecting the election. Still, she expects deceptive information about election integrity to spread online in the coming days, until Congress certifies the results on Jan. 6.
