Today is Election Day in Arizona, and older voters in Maricopa County are receiving phone calls falsely claiming that local voting sites have been shut down because of threats from militia groups. At the same time, in Miami, a wave of photos and videos circulating on social media falsely depicts poll workers throwing away ballots. The misleading calls in Arizona and the deceptive videos in Florida are later identified as “deepfakes,” sophisticated forgeries crafted with artificial intelligence tools. But despite efforts to correct the record, the fabrications have already spread across the country before local and federal authorities can intervene.
This scenario was part of a recent drill in New York that brought together former senior U.S. and state officials, civil society leaders, and technology company executives to prepare for the 2024 election. The exercise yielded alarming insights. Miles Taylor, a former high-ranking Department of Homeland Security official who helped coordinate the drill for the nonprofit The Future US, said participants were shocked by how quickly a handful of threats could escalate and overshadow the broader election narrative.
The exercise, named “The Deepfake Dilemma,” illustrated how AI-powered tools could amplify the spread of false information in an already divided society and potentially sow chaos in the 2024 election. It explored a scenario in which both domestic and foreign actors disseminated disinformation, exploited rumors, and capitalized on political divisions. Participants and organizers who discussed the exercise with NBC News highlighted the pressing questions it raised about how prepared federal and local officials, as well as the tech industry, are to combat disinformation aimed at undermining public trust in election results.
The drill underscored the uncertainty about the roles of federal and state agencies and technology firms in detecting and publicly debunking AI-generated deepfakes. America’s decentralized electoral system faces uncharted challenges, with no clear leadership established, according to Nick Penniman, CEO of Issue One, a bipartisan organization focused on political reform and election integrity. Penniman, who participated in the exercise, emphasized the lack of infrastructure and experience in the U.S. for defending against severe threats to elections.
In a simulated “White House Situation Room,” participants playing roles such as the directors of the FBI and the CIA and the secretary of homeland security worked through the alarming reports from Arizona and Florida, among other unverified threats. The exercise revealed initial confusion over the authenticity of the images showing ballots being thrown away in Miami, which had gone viral, fueled in part by a Russian bot-texting campaign. Eventually, the group determined that the incident had been staged and then digitally enhanced to appear more convincing.
The participants grappled with who should publicly reassure voters that their polling places were secure and their ballots would be counted. Federal officials worried that any public statement might be perceived as politically motivated. The discussion highlighted an ongoing debate over who bears responsibility for identifying and announcing disinformation: state election officials, private companies, or the White House.
Although the war game envisioned tech executives collaborating with federal officials, in reality, communication between the government and private firms on countering foreign propaganda and disinformation has significantly decreased. The collaboration that developed after the 2016 election has been undermined by political attacks and court rulings that discourage federal agencies from consulting with companies on content moderation.
This disconnect poses a significant risk to the security of the 2024 election. State governments may struggle to detect and counter AI deepfakes quickly, and technology companies and some federal agencies are hesitant to take the lead. Kathy Boockvar, a former Pennsylvania secretary of state who participated in the exercise, highlighted the obstacles posed by potential lawsuits and accusations of suppressing free speech.
The New York drill, along with similar exercises in other states, aims to foster better communication between tech executives and government officials. Outside these simulations, however, social media platforms have cut the teams that moderate false election content, and there is little sign that the companies are prepared for close cooperation with the government. State and local election offices also face a shortage of experienced staff, worsened by a surge in physical and cyber threats that has driven high turnover among election workers.
To address these challenges, a coalition of nonprofits and good-government groups plans to create a bipartisan, nationwide network of former officials, technology specialists, and others to assist local authorities in real-time detection of deepfakes and dissemination of accurate information. Penniman expressed hope that independent efforts could help bridge the gap left by the federal government and social media platforms.
Some of the largest AI tech firms have introduced safeguards and are communicating with government officials to enhance election security. OpenAI, for example, has implemented policies to prevent abuse, launched new features for transparency around AI-generated content, and developed partnerships to connect people with reliable voting information.
However, the internet hosts numerous smaller generative-AI companies and open-source tools that may not adhere to these safeguards, posing additional challenges to election integrity. The FBI’s Foreign Influence Task Force continues to lead efforts to identify and disrupt foreign malign influence operations targeting U.S. democratic institutions and values. The U.S. Cybersecurity and Infrastructure Security Agency (CISA) is also working closely with state and local agencies to protect the nation’s elections against various security risks, including foreign influence operations.
Participants recognized the importance of a public education campaign to help voters identify deepfakes and protect themselves against foreign and domestic disinformation. The Future US and other organizations are in discussions with Hollywood writers and producers about creating public service videos to raise awareness of fake video and audio clips during the election campaign.
If efforts to counter disinformation and potential violence fall short, however, the country could face an unprecedented deadlock over the election results. Danny Crichton of Lux Capital, which co-hosted the exercise, warned of the worst case: a “stalemate” with no clear winner, a situation he said the current system may not be robust enough to handle.