Richard Moore, chief of the United Kingdom’s Secret Intelligence Service (SIS), claimed in a rare public speech on Wednesday that “artificial intelligence [AI] will change the world of espionage, but it won’t replace the need for human spies,” while admitting that British spies are already using AI to disrupt the supply of weapons to Russia.
According to AP News, in his speech Moore painted AI as a “potential asset and major threat” and called China the “single most important strategic focus” for SIS, commonly known as MI6. He added, “We will increasingly be tasked with obtaining intelligence on how hostile states are using AI in damaging, reckless and unethical ways.”
Moore said that “the unique characteristics of human agents in the right places will become still more significant,” highlighting spies’ ability to “influence decisions inside a government or terrorist group.”
While speaking to an audience at the British ambassador’s residence in Prague, Moore urged Russians who oppose the invasion of Ukraine to spy for Britain. “I invite them to do what others have already done this past 18 months and join hands with us,” he said, assuring prospective defectors that “their secrets will always be safe with us” and that “our door is always open.”
While the MI6 chief spent more time talking about the Russia-Ukraine conflict, it was his comments on the West potentially “falling behind rivals in the AI race” that stood out. Moore declared that “together with our allies, [SIS] intends to win the race to master the ethical and safe use of AI.”
Similarly alert to how hostile states are using AI, the House Armed Services Subcommittee on Cyber, Information Technologies, and Innovation heard testimony from AI experts at Tuesday’s hearing, “Man and Machine: Artificial Intelligence on the Battlefield.”
The subcommittee’s goal was to discuss “the barriers that prevent the Department of Defense [DOD] from adopting and deploying artificial intelligence (AI) effectively and safely, the Department’s role in AI adoption, and the risks to the Department from adversarial AI.”
Alexandr Wang, founder and CEO of Scale AI, testified that during an investor trip to China, he witnessed first-hand the “progress that China was making toward developing computer vision technology and other forms of AI.” Wang was troubled at the time, “because this technology was also being used for domestic repression, such as persecuting the Uyghur population.”
Wang wrote in his statement:
It was evident that the Chin[ese] Communist Party (CCP) had already strategized how to harness AI for advancing its military and economic power. As China President Xi Jingping [sic] declared that same year, “[We must] ensure that our country marches in the front ranks where it comes to theoretical research in this important area of AI and occupies the high ground in critical and AI core technologies.”
Wang continued, “China deeply understands the potential for AI to disrupt warfare and is investing heavily to capitalize on the opportunity: It considers AI to be a ‘historic opportunity’ for ‘leapfrog development’ of national security technology.” Noting that China is outspending the United States nearly threefold on AI innovation, he warned that the U.S. “is at risk of being stuck in an innovator’s dilemma because it is comfortable and familiar with investing in traditional sources of military power.”
“While we are making sense of this technology and conceptualizing a framework for how to use it, Chinese leaders are actively working to use AI to tighten their grip domestically and expand their reach globally. It’s time to act. The U.S. must learn to embrace AI innovation before we are disrupted,” Wang said.
The American Enterprise Institute’s Klon Kitchen testified that AI — and generative AI (GenAI) particularly — “offers a substantial opportunity for the United States to reclaim its technological and economic upper hand on the global stage.”
Kitchen warned that “the rapid advancement of GenAI poses a significant near-term threat concerning its potential use against us by foreign adversaries.” He stated that “GenAI enhances the sophistication and effectiveness of cyber threats,” as “AI algorithms can learn and adapt to defensive measures, making attacks more evasive and difficult to detect,” and added that “there is also the potential for foreign adversaries to leverage GenAI for social engineering and psychological manipulation.” He continued:
To address this near-term threat, it is essential for governments, cybersecurity experts, and technology companies to collaboratively develop robust defenses against GenAI-powered cyber threats. This includes leveraging AI and machine learning technologies to enhance threat detection, automate responses, and mitigate the risks posed by AI-driven attacks.
And Haniyeh Mahmoudian, a global AI ethicist with DataRobot, shared with the House committee:
Investment in AI literacy for military personnel at all levels is a key step to ensuring responsible use of AI. It is critical to educate different stakeholders about AI and AI ethics…. [T]he Department of Defense should implement a comprehensive AI governance framework and adapt risk management processes to manage and mitigate the risks associated with AI.
Knowing how slowly the wheels of government turn, Mahmoudian also warned that the DOD’s slow, delay-prone procurement processes, set against a fast-changing AI field, could leave the department with obsolete AI tools whose models need retraining as the underlying data shifts over time. That could very well hand China and other potential enemies the upper hand in military AI innovation.