Al Jazeera
Andy Hirschfeld

X’s Grok 2 AI chatbot escalates problem of deepfakes ahead of US elections

'Deepfake' videos that manipulate reality are becoming more sophisticated and realistic as a result of advances in artificial intelligence, creating a potential for new kinds of misinformation [File: Rob Lever/AFP]

In August, X, the social media company once known as Twitter, publicly released Grok 2, the latest iteration of its AI chatbot. With limited guardrails, Grok has been responsible for pushing misinformation about elections and allowing users to make lifelike artificial intelligence-generated images – otherwise known as deepfakes – of elected officials in ethically questionable positions.

The social media giant has started to rectify some of its problems. After election officials in Michigan, Minnesota, New Mexico, Pennsylvania and Washington wrote to X head Elon Musk alleging that the chatbot produced false information about state ballot deadlines, X now points users to Vote.gov for election-related questions.

But when it comes to deepfakes, that’s a different story. Users are still able to make deepfake images of politicians engaged in questionable and, in some cases, illegal activities.

Just this week, Al Jazeera was able to make lifelike images that show Texas Republican Senator Ted Cruz snorting cocaine, Vice President Kamala Harris brandishing a knife at a grocery store, and former President Donald Trump shaking hands with white nationalists on the White House lawn.

In the weeks prior, filmmakers The Dor Brothers made short clips using Grok-generated deepfake images showing officials including Harris, Trump and former President Barack Obama robbing a grocery store, which circulated on social media. The Dor Brothers did not respond to a request for comment.

Screenshot from the clip made by The Dor Brothers [X/@thedorbrothers]

Grok’s capabilities have raised questions about the ethics behind X’s technology, especially as other companies like OpenAI, amid pressure from the White House, are putting safeguards in place to block certain kinds of content from being made. OpenAI’s image generator DALL-E 3 will refuse to make images of a public figure named in a prompt. The company has also built a product that detects deepfake images.

“Common sense safeguards in terms of AI-generated images, particularly of elected officials, would have even been in question for Twitter Trust and Safety teams pre-Elon,” Edward Tian, co-founder of GPTZero, a company that makes software to detect AI-generated content, told Al Jazeera.

Grok’s new technology escalates an already pressing problem across the AI landscape – the use of fake images.

Grok is not the first source of such images. Earlier in this election cycle, before Grok was on the market, the now-suspended campaign of Florida Governor Ron DeSantis used a series of fake images showing Trump embracing Anthony Fauci, a key member of the US task force that was set up to tackle the COVID-19 pandemic. The fabricated images, which the AFP news agency debunked, were intertwined with real images of the two men in meetings.

The gimmick was intended to undermine Trump by embellishing his ties to Fauci, an expert adviser with no authority to make policy. Trump’s voter base had blamed Fauci for the spread of the pandemic instead of holding Trump accountable.

Trump’s use of fake images

While Trump was targeted in that particular case by the DeSantis campaign, he and his surrogates are often the perpetrators.


The Republican National Committee used AI-generated images in an advertisement depicting panic on Wall Street if Biden, who was the presumptive Democratic nominee at the time, were to win the election. The assertion came despite markets performing fairly well under Biden during his first term.

In the last few weeks, Trump has posted fake images, including one that suggested that Harris spoke to a group of communists at the Democratic National Convention.

On Monday, Musk perpetuated Trump’s inaccurate representation of Harris’s policies by posting an AI-generated image of Harris wearing a hat with a communist insignia, suggesting that her policies align with communism – an increasingly common and inaccurate label Republicans have used in recent years to describe the Democratic Party’s policy positions.

The misleading post comes as Musk is accused of facilitating the spread of misinformation across the globe. X faces legal hurdles in jurisdictions including the European Union and Brazil, which blocked access to the website over the weekend.

This comes weeks after Trump reposted on his social media platform Truth Social a fake image inaccurately alleging that singer Taylor Swift had endorsed him and that her loyal fans, colloquially referred to as “Swifties”, supported him.

There are vocal movements tied to Swift’s fans on both sides of the political spectrum, but none of them is officially connected to the pop star.

One of the images Trump shared, showing “Swifties for Trump”, was labelled as satire and came from the account Amuse on X. The post was sponsored by the John Milton Freedom Foundation (JMFF), a group that says it empowers independent journalists through fellowships.

“As [a] start-up nonprofit, we were fortunate to sponsor, at no cost, over 100 posts on @amuse, a good friend of JMFF. This gave us over 20 million free impressions over a period of a few weeks, helping our exposure and name ID. One of those posts was clearly marked as ‘SATIRE’, making fun of ‘Swifties for Trump’. It was clearly a joke and was clearly marked as such. It was later responded to by the Trump campaign with an equally glib response of ‘I accept’. End of our participation with this, aside from what was a small smile on our behalf,” a JMFF spokesperson told Al Jazeera in a statement.

The group’s fellows include people known for spreading misinformation and unverified far-right conspiracy theories, among them Lara Logan, who was banned from the right-wing news channel Newsmax after a conspiracy-laden tirade in which she accused world leaders of drinking children’s blood.

The former president told Fox Business that he is not worried about being sued by Swift because the images were made by someone else.

The Trump campaign did not respond to a request for comment.

Blame game

Such deflection is part of the concern of the watchdog group Public Citizen, which worries that various stakeholders will shift the blame to evade accountability.


In June, Public Citizen called on the Federal Election Commission (FEC) to curb the use of deepfake images in elections. In July of last year, the watchdog group petitioned the agency to address the growing problem of deepfakes in political advertisements.

“The FEC, in particular some of the Republican commissioners, have a clear anti-regulatory bent across the board. They have said that they don’t think that the FEC has the ability to make these rules. They sort of toss it back to Congress to create more legislation to empower them. We completely disagree with that,” Lisa Gilbert, Public Citizen co-president, told Al Jazeera.

“What our petition asks them to do is simply apply a longstanding rule on the books, which says you can’t put forth fraudulent misrepresentations. If you’re a candidate or a party, you basically can’t put out advertisements that lie directly about things your opponents have said or done. So it seems very clear to us that applying that to a new technology that’s creating that kind of misinformation is an obvious step and clarification that they should easily be able to do so,” Gilbert added.

In August, Axios reported that the FEC would likely not enact new rules on AI in elections during this cycle.

“The FEC is kicking the can down the road on one of the most important election-related issues of our lifetime. The FEC should address the question now and move forward with a rule,” Gilbert said.

The agency was supposed to vote on whether to reject Public Citizen’s proposal on Thursday. A day before the open meeting, Bloomberg reported that the FEC will vote on whether to consider proposed regulations on AI in elections on September 19.

The Federal Communications Commission (FCC), which regulates TV, cable and radio, is considering a plan that would require political advertisements that use AI to carry a disclosure, but only if they run on TV and radio platforms.

The rule does not apply to social media companies. It also puts the responsibility on a candidate rather than the maker of a product that allows consumers to create deepfake photos. Nor does it hold responsible individual bad actors who may make the content but are not involved with a campaign.

FEC Commissioner Sean Cooksey has pushed back on the FCC and said the latter does not have jurisdiction to make such a ruling even as the FCC says it does.

“The FCC plans to move forward with its thoughtful approach to AI disclosure and increased transparency in political ads,” an FCC spokesperson told Al Jazeera in a statement.

The FEC declined a request for comment.

At the moment, there is no law on the books at the federal level that bans or requires disclosure of the use of AI in political advertisements, and it is the responsibility of social media companies themselves to monitor and remove deepfakes on their respective platforms.

While there are several bills that require social media platforms to have safeguards, it is not clear if they will pass, let alone be enacted into law in time for the 2024 election. Bills like the bipartisan Protect Elections from Deceptive AI Act face stiff opposition, including from Senate Minority Leader Mitch McConnell.

This comes alongside a broader bill introduced in late July that tackles deepfakes. Called the NO FAKES Act, the bill would protect all individuals, famous or otherwise, from unauthorised use of their likeness in computer-generated video, photos or audio recordings.

“There is interest on all sides to try to avoid misleading consumers into believing something that is factually untrue,” Rob Rosenberg, founder and principal of Telluride Legal Strategies, told Al Jazeera.

There is strong bipartisan consensus for the NO FAKES bill, authored by Democratic Senators Chris Coons (Delaware) and Amy Klobuchar (Minnesota) and Republican Senators Marsha Blackburn (Tennessee) and Thom Tillis (North Carolina).

“For the first time, it feels like there is a good chance that we’re going to have a federal act that protects these kinds of rights,” Rosenberg added.

However, it is not clear if the bill will be enacted into law by election day. There has been more traction for action at the state level.

“Unlike at the federal level, there’s been a huge response from elected officials to pass these bills,” Gilbert said.

Patchwork of laws

State legislatures in both Republican- and Democrat-led states have enacted policies that ban or require disclosure of the use of deepfakes in campaign advertisements, but the result is a patchwork, with some laws more stringent than others. While most of these states require disclosures on deepfakes, a handful, including Texas and Minnesota, have outright prohibitions.

AI fakery is quickly becoming one of the biggest problems online. This photo-illustrated graphic highlights a few notable areas of a deepfake of Pope Francis [AP Photo]

Texas passed a law in 2019 that bans the use of deepfake videos to harm a candidate or influence an election, but it applies only in the 30 days before an election and does not cover deepfake photos or audio. Failure to comply can result in a $4,000 fine and up to a year in prison.

State leaders there are actively evaluating policies about regulating the sector. As recently as last week, there was a hearing to discuss how to regulate AI in the state. Austin – the state’s capital and hub for the tech industry – is where Musk is set to move X’s headquarters from San Francisco, California.

Minnesota, on the other hand, enacted its prohibition in 2023 and bars the use of all deepfake media 90 days before the election. Failure to comply can come with fines of up to $10,000, five years in prison or both.

As of the end of July, 151 state-level bills were introduced or passed this year to address AI-generated content, including deepfakes and chatbots.

Overall, the patchwork of laws does not put pressure on social media platforms or on the companies that make the tools allowing bad actors to create deepfakes.

“I certainly think the corporations are responsible,” Gilbert, of Public Citizen, said, referring to social media platforms that allow deepfake posts. “If they don’t take it down, they should be held liable.”

“This is an issue across the political spectrum. No one is immune to sprouting conspiracy theories,” GPTZero’s Tian added.

Musk, who has purveyed misinformation himself, has shown reluctance to police content, at least for users he agrees with politically. As Al Jazeera previously reported, Musk has emboldened conservative voices while concurrently censoring liberal groups like White Dudes 4 Harris.

An Al Jazeera request for comment received an automated message from X: “Busy now, please check back later.”

The rise of deepfakes is not just a concern for those who need to debunk fake images; their prevalence also gives bad actors a way to create doubt around verifiable images. After a large Harris rally in Detroit, Michigan, on August 7, Trump inaccurately claimed that photos of the event were AI-generated.

“AI is already being weaponised against real images. People are questioning verifiable images,” Tian added. “At the end of the day, the casualty here is the truth.”
