The Federal Election Commission (FEC) is evaluating how to regulate AI technologies, particularly deepfakes, in political campaign advertising. Commissioner Dara Lindenbaum noted the significant public engagement that followed a 2023 petition calling on the FEC to curb deceptive AI practices in political ads. The petition prompted the FEC to consider amending its rules so that the existing fraudulent-misrepresentation regulation, 11 CFR 110.16, would cover deliberately deceptive AI-generated campaign content.
The FEC’s jurisdiction is currently limited to federal campaign finance law, so extending its rules to cover AI-based misinformation in campaign ads may require both regulatory amendments and congressional action to broaden its authority. The emerging threat posed by AI-generated deepfakes was underscored by fraudulent robocalls during New Hampshire’s presidential primary.
Despite voluntary commitments by tech firms to deploy AI responsibly, concerns persist about the unregulated use of AI tools. The State Department and cybersecurity experts have warned that foreign adversaries could exploit AI for malicious purposes, highlighting a critical gap in US regulatory oversight of emerging technologies.
The FEC’s consideration of AI regulation in political campaigns follows direct appeals from lawmakers, including Rep. Adam Schiff and 50 Democratic legislators, who emphasized the need to adapt FEC rules to the disruptive potential of generative AI in undermining election integrity. Bipartisan interest in regulating AI in campaign ads signals a shift toward a more comprehensive approach to protecting the democratic election process from AI-driven misinformation.