The Alan Turing Institute, the U.K.’s national institute for artificial intelligence, has urged the government to establish “red lines” against the use of generative AI in scenarios where the technology could take irreversible action without direct human oversight. The institute’s report emphasizes how unreliable and error-prone generative AI tools remain in high-stakes national security contexts, and it cautions against placing excessive trust in AI-generated outputs.
It calls for a shift in mindset to address the unintentional ways in which generative AI poses national security risks, and it singles out autonomous agents, a specific application of generative AI, as requiring close oversight in national security contexts. While the report acknowledges the technology’s potential to accelerate national security analysis, critics argue that it still falls short of human-level reasoning.
The report recommends practical safeguards, such as recording the actions taken by autonomous agents and attaching warnings to every stage of generative AI output, and it proposes stringent restrictions in areas requiring “perfect trust,” such as nuclear command and control. Turning to malicious uses of generative AI, it notes the difficulty of policing AI-generated content and recommends government support for watermarking features that are resistant to tampering. Despite the U.K. government’s efforts to lead on responsible AI development, the report raises questions about the pace of AI regulation, emphasizing updates to existing regulations over the introduction of AI-specific rules. It closes by underscoring the ongoing global debate about AI governance and the balance among innovation, security, and regulatory measures to mitigate potential risks.