On September 9, 2024, China’s National Technical Committee 260 on Cybersecurity released the AI Safety Governance Framework, designed to implement the Global AI Governance Initiative. The framework acknowledges the significant opportunities AI presents while highlighting the risks it poses. Aimed at ensuring that AI development accounts for safety, ethics, and social implications, it emphasizes a “people-centered approach” and a commitment to developing AI for good. It establishes principles and control measures for addressing AI risks, spanning both technological and governance strategies, and calls for ongoing updates to keep pace with evolving technologies.
The framework foregrounds the ethical concerns associated with AI development, including safety, transparency, and accountability. It seeks to avoid biased or discriminatory outcomes and to ensure that AI is developed in an inclusive and equitable manner. To that end, it outlines control measures for managing different types of AI risk, pairing technological solutions with management strategies, and stresses that these control mechanisms must be continuously updated as new challenges emerge.
The framework divides AI safety risks into two main categories: inherent risks arising from the technology itself and risks posed by its application. It lists a broad range of specific risks, such as algorithmic flaws, explainability challenges, data misuse, and the potential for AI to be exploited in cyberattacks or other illegal activities. While the framework does not assign graded risk levels, it acknowledges that a tiered risk approach along the lines of the EU’s may be incorporated into future Chinese regulations. It also emphasizes technological measures for addressing identified risks, focusing on improving data quality, strengthening development practices, and ensuring compliance with privacy and data protection laws.
The framework also calls for comprehensive governance involving multiple stakeholders, including R&D institutions, service providers, users, government authorities, and industry associations. It advocates tiered management of AI applications, enhanced oversight of high-risk areas, and proactive monitoring and mitigation of emerging risks. This multi-faceted approach is intended to ensure that AI systems are reliable, ethical, and aligned with global standards, and it encourages cross-border collaboration on shared challenges in AI safety and cybersecurity. In this way, the framework supports China’s goal of becoming a global leader in AI by 2030, balancing innovation with the regulation needed for ethical and safe AI development.