TV Tech
George Winslow

FCC Takes First Step in Effort to Require Disclosure of AI-Generated Content in Political Ads

FCC Chairwoman Jessica Rosenworcel.

WASHINGTON, D.C.—The Federal Communications Commission will move forward on a previously announced proposal to require disclosure of AI-generated content in TV and radio ads.

The plan drew immediate opposition from both Republican-appointed commissioners, Nathan Simington and Brendan Carr.

The FCC said the proposal aims to increase transparency by requiring those who already have legal duties to file information about their TV and radio advertisements with the FCC to indicate whether AI was used and to make on-air disclosures of that use. The proposal would not prohibit such content, only require disclosure of AI use within political ads, the FCC stressed.

“Today the FCC takes a major step to guard against AI being used by bad actors to spread chaos and confusion in our elections. We propose that political advertisements that run on television and radio should disclose whether AI is being used,” said FCC Chairwoman Jessica Rosenworcel. “There’s too much potential for AI to manipulate voices and images in political advertising to do nothing. If a candidate or issue campaign used AI to create an ad, the public has a right to know.”

In announcing the release of a Notice of Proposed Rulemaking, which must still be adopted by the FCC before disclosure requirements could be implemented, the FCC stressed that artificial intelligence has become powerful enough to mimic human voices and create lifelike images. During this year’s primary election in New Hampshire, thousands of voters received an AI-generated robocall impersonating President Biden that told them not to vote. This past summer, the campaign of Governor DeSantis was flagged for circulating fake AI-altered images of former President Trump. Facing a rising tide of disinformation, roughly three-quarters of Americans say they are concerned about misleading AI-generated content, the FCC reported.

The FCC also argued that Congress has granted the FCC authority regarding the political messages people see on television, hear over the radio, or receive over the phone.  Since the 1930s, the FCC has used this authority to require broadcasters to maintain a publicly available file for political ads.  This file has information about who bought a campaign ad, how much they paid for it, and when it ran. Over time Congress expanded these requirements to include ads that run on cable and satellite. These are also the policies that led to what are now familiar on-air disclosures so that every viewer and listener knows who is responsible for every ad. 

The FCC also stressed that the Federal Elections Commission is considering a separate rulemaking on AI, announcing this year that it expects to act in early summer. In a recent letter to the FCC, the FEC’s Vice Chair wrote, “No one agency currently has the jurisdiction or the capacity to address every aspect of this large and complicated issue.” While the FEC can regulate AI use in online advertisements for federal candidates, the FCC can focus on the areas where the FEC is not able to act. The FEC does not oversee television and radio stations, and under the law its authority over campaigns is limited to federal political candidates and does not extend to independent issue campaigns or state and local elections.

Nearly half of states across the country have enacted laws to regulate the use of AI and deepfake technology in elections, and most of these laws are bipartisan, the FCC also said. The FCC proposal seeks to bring uniformity and stability to this patchwork of state laws and greater transparency to elections.

When Rosenworcel first announced that the FCC should consider proposals to disclose AI content in political ads, the idea was quickly attacked by FCC Commissioner Carr as unworkable, beyond the agency’s authority and counterproductive.

Carr, who wrote a chapter in Project 2025 outlining plans for the FCC if Donald Trump is elected president, continued his opposition to the idea in a July 25 statement, saying it was part of a larger plan by the Democratic Party to regulate and limit political speech: “The Democratic National Committee (DNC) is now working to change the rules of the road in the run-up to the 2024 election. It has done so by calling on the administrative state to impose new controls on political speech before voters hit the ballot boxes this fall. The FCC proposal adopted today echoes that DNC-backed initiative and would impose new regulations on the use of AI-generated political speech at the eleventh hour. This push for new regulations comes on the heels of press reports that the DNC and Democrat candidates are ‘nervous about not keeping up with the GOP in embracing’ artificial intelligence technologies in this election cycle.”

Carr also argued that “Congress has not given the FCC the type of freewheeling authority over these issues that would be necessary to turn this plan into law.” 

He called the proposal “a recipe for chaos” because it would apply only to TV and radio ads, since the FCC lacks the authority to regulate ads on the internet, social media and other digital platforms.

The proposed rules, he added, are ill-advised in the run-up to a major election: “For starters, the FCC is wading into an area rife with politicization. It is not difficult to see how partisan interests might weaponize the FCC’s rules during an election season.”

Nor would it do much to solve the problem of political ads that confuse voters, he contended.

“The FCC’s involvement can only amplify that confusion,” Carr wrote. “The FCC is legally powerless to adopt uniform rules in a technologically neutral fashion. Its legal authority claimed here extends only to legacy media—broadcast, cable, and satellite television—but not over-the-top content, like online streaming video and social media, where millions of Americans see political ads.  As a result, AI-generated political ads that run on traditional TV and radio will come with a government-mandated disclaimer but the exact same ad that runs on a streaming service or social media site would not.”

“I don’t see how this type of conflicting patchwork could end well,” Carr concluded. “Consumers don’t think about the content they consume through the lens of regulatory silos. They just view content on screens. Will they conclude that the absence of a government warning on an online ad means that the content must be real? And applying new regulations on the broadcasters the FCC regulates but not on their largely unregulated online competitors only exacerbates regulatory asymmetries.”  
