The Federal Election Commission on Thursday was deadlocked on a request to develop regulations for AI-generated deepfake political ads.
Public Citizen, a nonpartisan advocacy group, submitted a petition last month asking the commission to establish rules, noting that advances in artificial intelligence have given political operatives the tools to produce campaign ads with computer-generated fake images that appear real. Such ads could misrepresent a candidate’s political views, a violation of existing federal law.
Robert Weissman, Public Citizen’s president, issued a blistering response after the commission voted 3-3 on the question, which meant no action would be taken.
“The Federal Election Commission just shamefully refused to use its existing authority — or even accept comments on a proposal to use its existing authority — to address the oncoming stampede of deceptive deepfakes that threaten to trample our democracy,” Weissman said in a statement. “This is a shocking failure even for a notoriously feckless agency. The FEC’s failure makes it even more imperative that Congress and states act immediately to outlaw deceptive deepfakes in elections.”
Commissioner Allen Dickerson said the commission doesn’t have the power to regulate such ads.
“With my full support the commission has asked Congress to expand our authority, and I understand draft legislation has been introduced, but right now the only fraud we’re entitled to police is where an agent of one candidate pretends to be the agent of another or where a person raises funds by fraudulently claiming to be acting on behalf of the campaign with which he or she is unaffiliated,” Dickerson said during Thursday’s meeting.
Dara Lindenbaum, the commission’s chairwoman, said she agreed with Dickerson that the FEC lacks the jurisdiction to issue such regulations. But she voted in support of the motion to accept public comment in advance of making new rules, saying the process would be helpful to policymakers.
“I also share Commissioner Dickerson’s concern about whether or not we have any jurisdiction here,” she said. “I’m skeptical that we do, but during this process I hope that we get some really wonderful comments in from people who may not usually comment … that may have ideas that may help us or Congress. So I think it’s a process that’s worth it.”
Several measures to address emerging AI technology have been introduced in Congress. Rep. Yvette D. Clarke, D-N.Y., introduced a bill that would extend the Federal Election Campaign Act’s disclosure rules for TV and radio advertising to online content and add a further notice requirement for ads that use AI-generated material.
“When you think about our adversaries that have been tampering with our [elections], this is another tool that they can access, knowing that we’re vulnerable in an open society to misinformation and disinformation,” Clarke told CQ Roll Call last month.
Democrats Amy Klobuchar of Minnesota, Cory Booker of New Jersey and Michael Bennet of Colorado recently introduced similar legislation in the Senate, while Majority Leader Charles E. Schumer has met with Sens. Martin Heinrich, D-N.M., Mike Rounds, R-S.D., and Todd Young, R-Ind., to brainstorm bipartisan legislation.
The use of the technology in ads is more than theoretical, as shown by the Republican National Committee’s response to President Joe Biden’s announcement in April that he was running for a new term. The RNC released a video, which it disclosed in advance was generated by AI, depicting a second Biden term leading to a dystopian future with cities overrun by crime, a Chinese invasion of Taiwan and bank failures sending markets into free fall.
Jim Saksa contributed to this report.
The post FEC deadlocks on whether to govern deepfake campaign ads appeared first on Roll Call.