- US commission was divided on AI oversight jurisdiction
- ‘Fraudulent misrepresentation’ applies, Public Citizen says
The nonprofit Public Citizen has petitioned the Federal Election Commission for the second time to consider new rules to regulate AI-generated “deepfakes” in political campaign ads.
The petition, submitted to the FEC on Thursday, came after the agency rejected the consumer advocacy group’s initial petition in late June. Its commissioners deadlocked 3-3 along party lines on whether the ads featuring highly realistic audio and visual content created by artificial intelligence fell under the agency’s jurisdiction given its historically narrow focus on campaign-finance regulations and disclosures.
As concern grows over potential misuse of AI technology, Public Citizen reiterated its push for federal regulation ahead of the 2024 elections.
“Deceptive deepfakes are already appearing in elections and it is a near certainty that this trend will intensify absent action from the Federal Election Commission and other policymakers,” wrote Public Citizen President Robert Weissman and government affairs lobbyist Craig Holman.
The group cited several recent examples of deepfakes already appearing in political contexts.
Public Citizen’s new petition focused on two key concerns commissioners had with the initial request. Commissioners had expressed doubt that the FEC had statutory authority to regulate AI content, and they said the group had failed to cite a particular regulation it sought to modify.
The group has pointed to the federal campaign law against “fraudulent misrepresentation,” which prohibits federal candidates and their employees or agents from falsely representing themselves as “speaking, writing, or acting for or on behalf of another candidate or political party.”
Public Citizen’s petition argued that deepfakes enable such illegal distortion.
“By falsely putting words into another candidate’s mouth, or showing the candidate taking action they did not, the deepfake would fraudulently speak or act ‘for’ that candidate in a way deliberately intended to damage him or her,” Weissman and Holman wrote.
The petition noted that any fraudulent misrepresentation restrictions would apply specifically to the inclusion of deepfakes, not to the general use of AI tools in campaign communications. Neither would they apply to parodies, in which the deepfake’s purpose is not to deceive voters, or in cases where there is “sufficiently prominent disclosure” that content is AI-generated.
Debate Over Power
Commissioners wavered last month on whether the FEC was the right authority to push regulation forward.
“Instead of coming to us, they should take this up with Congress,” Republican Commissioner Allen Dickerson said at the time.
Public Citizen’s new petition, though, noted Dickerson had joined fellow Republican Commissioner James Trainor in a 2021 statement issued in a case specifically addressing fraudulent misrepresentation, showing precedent for the FEC to consider such regulation.
Dickerson and Trainor’s statement cited the same provision Public Citizen did, saying that federal law gives the commission a “narrow and discrete grant of authority.”
Some commissioners expressed openness last month to supporting Public Citizen’s motion. Though Democratic Chairwoman Dara Lindenbaum said she was “skeptical” about the scope of FEC jurisdiction, she voted in support to hear public comment.
Public Citizen’s new petition asked commissioners to specify in an amendment to a regulatory provision that penalties would apply if candidates fraudulently misrepresent other candidates or political parties through AI-generated content.
The group recommended this action “in view of the novelty of deepfake technology and the speed with which it is improving.”