A new proposal released by the Federal Communications Commission would require political ads that use artificial intelligence to disclose that use. The FCC notice, issued Wednesday, comes nearly three months after AI-generated robocalls targeted voters in New Hampshire.
Under the proposal, political advertising containing AI-generated content would require an on-air disclosure as well as a written disclosure that broadcasters would keep on file.
“As artificial intelligence tools become more accessible, the Commission wants to ensure that consumers are fully informed when these technologies are used,” FCC Chairwoman Jessica Rosenworcel said in a statement. “You have the right to know when AI is used in the content you see or listen to,” she said.
Disclosing the use of AI in political advertising makes elections more transparent. Today we are taking the first step with a proposal that makes it clear that consumers have the right to know when AI tools are used in the political ads they see on TV and radio.
— FCC (@FCC) May 22, 2024
The disclosure rules would apply to broadcasters and other entities that offer “original programming,” meaning programming produced or acquired under license for transmission to subscribers, including cable and satellite TV and radio providers.
The proposed policy requires disclosure but does not ban AI-generated content outright. The agency has, however, taken stronger measures in the past.
In February, the FCC banned the use of AI-generated robocalls after an audio deepfake of President Joe Biden attempted to trick New Hampshire residents into not voting in the state’s February primary. Biden, who has been the subject of previous AI-generated deepfakes, called for a ban on AI voice impersonation during his State of the Union address in March.
But while Biden is calling for a ban on AI voice impersonation, Matt Diemer, a candidate in Ohio’s 7th Congressional District, has partnered with AI developer Civox to use the technology to communicate with voters.
“Using a system like Civox allows me to get my voice out there to people,” Diemer previously told Decrypt. “That would be over 730,000 citizens across the state.”
“This is no different from sending blogs, emails, text messages, TikToks or tweets,” he said. “This is another way for people to interact with me and have more of a connection.”
Diemer, a regular host of Decrypt’s daily GM podcast, has previously differentiated his candidacy through his support of cryptocurrencies, making AI the latest technology added to his campaign toolbox.
Developers of generative AI models, including Microsoft, OpenAI, Meta, Anthropic, and Google, have already restricted or banned the use of their large language model platforms in political advertising.
“Out of an abundance of caution as we prepare for the many elections taking place around the world in 2024, we are limiting the types of election-related queries for which Gemini will return responses,” a Google spokesperson previously told Decrypt.
The FCC emphasized the need to remain vigilant against fraudulent AI-generated deepfakes as it looks toward the U.S. elections this fall and beyond.
“The use of AI is expected to play a significant role in the production of political advertising beyond 2024, but the use of AI-generated content in political advertising has the potential to provide voters with deceptive information, particularly through the use of ‘deepfakes’: altered images, videos, and audio recordings that depict people doing or saying things they did not actually do or say, or events that did not actually occur,” the agency said.
The FCC did not immediately respond to Decrypt’s request for comment.
Edited by Ryan Ozawa.