Eighteen countries, including the U.S. and U.K., have signed an agreement on AI safety built around the principle that AI systems should be secure by design. Led by the U.K.'s National Cyber Security Centre (NCSC) and developed with the U.S. Cybersecurity and Infrastructure Security Agency (CISA), the guidelines represent the first global agreement of their kind. They aim to help developers treat cybersecurity as a fundamental prerequisite for AI system safety across secure design, development, deployment, and operation.
The guidelines, titled "Guidelines for Secure AI System Development," target providers of AI systems whose models are hosted by the organization itself or accessed through external application programming interfaces (APIs). They are organized around four areas: secure design, which covers understanding risks and threat modeling; secure development, including supply chain security and asset management; secure deployment, which addresses protecting infrastructure and models, incident management processes, and responsible release; and secure operation and maintenance, with considerations for logging, monitoring, update management, and information sharing.
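The document stays at the level of principles rather than prescribing tooling, but practices like logging and monitoring are concrete enough to sketch. As a purely illustrative example, not drawn from the guidelines themselves and with all function names and thresholds being assumptions, a provider might wrap model calls in an audit layer that records privacy-preserving request metadata and flags anomalous inputs for review:

```python
import hashlib
import logging
from datetime import datetime, timezone

# Hypothetical audit wrapper illustrating the "logging and monitoring"
# principle; model_fn and the threshold below are assumptions for this
# sketch, not anything prescribed by the NCSC/CISA document.
logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("ai-audit")

MAX_PROMPT_CHARS = 4_000  # example threshold for flagging oversized inputs


def audited_generate(model_fn, prompt: str) -> str:
    """Call model_fn(prompt), emitting an audit record for each request."""
    # Hash the prompt so audit logs do not retain raw user data.
    digest = hashlib.sha256(prompt.encode("utf-8")).hexdigest()[:16]
    if len(prompt) > MAX_PROMPT_CHARS:
        log.warning("oversized prompt flagged for review: digest=%s chars=%d",
                    digest, len(prompt))
    started = datetime.now(timezone.utc)
    output = model_fn(prompt)
    elapsed_ms = (datetime.now(timezone.utc) - started).total_seconds() * 1000
    log.info("request digest=%s chars_in=%d chars_out=%d latency_ms=%.1f",
             digest, len(prompt), len(output), elapsed_ms)
    return output


if __name__ == "__main__":
    # Stand-in for a real model call, just to make the sketch runnable.
    echo_model = lambda p: p.upper()
    print(audited_generate(echo_model, "hello, secure world"))
```

Keeping hashed digests rather than raw prompts in the audit trail is one way a provider could reconcile the guidelines' monitoring and incident-management goals with data-protection obligations.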
Signatories include Australia, Canada, Germany, Japan, and South Korea, but China, a major AI developer, is notably absent. The U.K. aims to build on momentum from the recent AI Safety Summit held at Bletchley Park, and the agreement underscores an international commitment to secure AI development and deployment. The guidelines align with broader efforts to establish global AI safety standards, such as President Biden's executive order and CISA's Roadmap for Artificial Intelligence, both of which focus on safeguarding AI systems from cyber threats.