Russia, Iran, and China are ramping up their efforts to influence the U.S. elections, claims the Office of the Director of National Intelligence (ODNI) in its latest update on election security. The report, released in mid-September, warns of intensifying foreign influence operations, particularly those involving the use of artificial intelligence (AI) to manipulate the U.S. information environment. Each of the U.S. adversaries is deploying distinct tactics to achieve its objectives, says the ODNI.
Generative AI: A New Tool in Influence Operations
According to the report, foreign actors, including Russia and Iran, have increasingly incorporated generative AI technology into their influence operations. The intelligence community (IC) noted in the report that these actors are using AI to produce election-related content more efficiently, although the technology has not yet “revolutionized” their operations. The IC nevertheless continues to assess that the risk to U.S. elections from AI-generated content is real.
“The IC is observing foreign actors, including Russia and Iran, [using] generative AI technology to boost their respective U.S. election influence efforts,” the report states, adding that these methods are part of a broader pattern noted as early as July 2024. Generative AI is being employed to speed up content creation across a variety of mediums — text, images, audio, and video.
However, the report warns that the impact of AI-generated content will depend on the sophistication of the actors involved. “The risk to U.S. elections from foreign AI-generated content depends on the ability of foreign actors to overcome restrictions built into many AI tools and remain undetected,” the report explains. America’s top intelligence office reassures that the IC continues to closely monitor attempts to inject deceptive AI content into the U.S. election discourse.
Russia
Russia is cited as the most prolific user of AI-generated content in the 2024 election cycle. According to the ODNI, Russia has deployed AI to create content across all mediums and has released AI-generated depictions of prominent U.S. figures. In its July report, the ODNI dubbed Russia “the predominant threat to U.S. elections.”
The ODNI unequivocally identifies the objective of Russia’s covert operations as “[boosting] the former President’s candidacy and [denigrating] the Vice President and the Democratic Party.” To that end, Russia generates content that often promotes “conspiratorial narratives” aligned with this objective.
The report details:
For example, the IC assesses Russian influence actors were responsible for staging a video in which a woman claims she was the victim of a hit-and-run car accident by the Vice President and altering videos of the Vice President’s speeches.
They apparently fabricated a story about Harris hitting a 13-year-old girl with a car in 2011, which reportedly went viral on platforms such as X.
Earlier this month, the Microsoft Threat Analysis Center (MTAC) identified a Kremlin-linked “troll farm,” Storm-1516, as responsible for creating and distributing the story using AI-generated content and actors. The center, which collaborates with law enforcement, governments, and the broader tech industry, cautioned that Russian election interference has shifted significantly toward “targeting the Harris-Walz campaign.”
“Russian AI-generated content has also sought to emphasize divisive U.S. issues such as immigration,” the ODNI report adds.
Immigration has consistently ranked among voters’ top issues, as many believe the Biden administration’s open-border policies are placing severe economic strain on everyday Americans as well as on local and state governments. The blame for that falls squarely on Harris, who has served as the administration’s “border czar” since March 2021.
Iran
According to the ODNI report, Iran has similarly ramped up its efforts, using AI to generate social-media posts and fabricate news articles, commonly referred to as “fake news.” These operations, carried out in multiple languages, including English and Spanish, aim to stoke discord around key U.S. issues such as the Israel-Gaza conflict and the 2024 presidential candidates. Iran is allegedly generating fake content that portrays Israel’s conduct in Gaza in a manner intended to diminish U.S. public support for Israel’s 11-month military operation against Hamas. The issue remains highly contentious, with factions of the Democratic Party and some conservative figures alike criticizing the operation, which has led to more than 40,000 casualties in Gaza, figures deemed “generally accurate” by Israeli intelligence. Despite the controversy, both Donald Trump and Kamala Harris have consistently expressed strong support for Israel, reiterating their commitment on multiple occasions.
“Iranian actors have used AI to help generate social media posts and write inauthentic news articles for websites that claim to be real news sites,” the ODNI claims. This tactic is reportedly targeting a broad swath of U.S. voters across the political spectrum.
China
China, on the other hand, has focused its AI efforts on broader influence operations rather than direct election interference. The ODNI report outlines how pro-China actors are using AI-generated news anchors and fake social-media accounts with AI-generated profile pictures to “sow division” on topics such as drug use, immigration, and abortion. According to the ODNI, however, China’s use of AI is aimed primarily at shaping global perceptions of China rather than at influencing U.S. election outcomes.
AI and Elections
The rapid advancement of AI is raising concerns about its potential impact on elections. Deepfakes — hyper-realistic, AI-generated videos — could easily be weaponized to distort political discourse, manipulate voter perceptions, and undermine the integrity of democratic processes. Experts warn that as these digital forgeries become indistinguishable from reality, they could be used to create fabricated speeches, events, or scandals involving candidates.
AI has frequently been a major topic in discussions surrounding the current race for the White House.
Elon Musk, owner of X and a Trump supporter, faced criticism after sharing an AI-generated video of Kamala Harris. In it, an AI-cloned version of Harris’ voice describes her as “the deep state puppet” and “the ultimate diversity hire” who does not “know the first thing about running the country.”
This incident apparently violated the platform’s own policies against synthetic and manipulated media that can “deceive or confuse people.”
In August, five Secretaries of State pressured X into adding election warnings to its Grok chatbot after it provided misleading ballot information.
Similarly, OpenAI’s ChatGPT initially directed users to CanIVote.org for election-related queries but stopped answering election questions entirely after instances of substantial inaccuracies.