The European Union's AI Act, which entered into force on August 1, 2024, introduces strict cybersecurity and incident reporting requirements for companies using AI systems. These measures address the rising threat of cyberattacks targeting AI models, a concern mirrored by similar initiatives in the US, such as NIST's AI Risk Management Framework. The Act obliges companies to safeguard high-risk AI systems, ensuring accuracy, resilience, and robust security throughout their lifecycle.
High-risk AI systems under the AI Act must be designed to withstand errors and cyberattacks. Providers of these systems must disclose key characteristics and limitations to users, and general-purpose AI (GPAI) models that pose systemic risk face additional cybersecurity obligations. The Act also requires companies deploying AI models to implement specific resilience measures against unauthorized access and other vulnerabilities.
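In practice, the disclosure obligation lends itself to a machine-readable record that travels with the system. The sketch below is purely illustrative: the class and field names are hypothetical, not terms drawn from the Act, and a real disclosure would follow whatever template regulators or standards bodies ultimately specify.

```python
from dataclasses import dataclass, field


@dataclass
class SystemDisclosure:
    """Hypothetical record of the characteristics and limitations a
    high-risk AI system provider might disclose to deployers.
    Field names are illustrative, not taken from the Act itself."""
    system_name: str
    intended_purpose: str
    accuracy_metrics: dict[str, float]
    known_limitations: list[str] = field(default_factory=list)
    cybersecurity_measures: list[str] = field(default_factory=list)


disclosure = SystemDisclosure(
    system_name="resume-screening-v2",
    intended_purpose="Rank job applications for human review",
    accuracy_metrics={"f1_score": 0.91},
    known_limitations=["Degraded accuracy on non-English resumes"],
    cybersecurity_measures=["Input validation", "Rate limiting", "Audit logging"],
)
print(disclosure.known_limitations)
```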
In addition to cybersecurity, the AI Act imposes strict incident reporting rules. Companies must report serious incidents involving AI systems, such as those causing harm to individuals or critical infrastructure, within tight deadlines: generally no later than 15 days after the provider becomes aware of the incident, and as little as two days where critical infrastructure is affected. The reporting requirements, which echo the GDPR's breach-notification regime, ensure that organizations swiftly notify authorities, with follow-up reports expected as necessary.
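Because the clock starts at awareness, incident-response runbooks need to compute the notification deadline from the incident type. The minimal sketch below assumes the reporting windows commonly read from Article 73 (15 days generally, 10 days for an incident causing death, two days for widespread or critical-infrastructure incidents); confirm the current text and any implementing guidance before relying on these values.

```python
from datetime import datetime, timedelta
from enum import Enum


class IncidentType(Enum):
    GENERAL_SERIOUS = "general serious incident"
    DEATH = "incident causing death"
    CRITICAL_INFRASTRUCTURE = "widespread / critical-infrastructure incident"


# Assumed reporting windows per Article 73 of the AI Act; verify against
# the current legal text before using in a real compliance workflow.
REPORTING_WINDOWS = {
    IncidentType.GENERAL_SERIOUS: timedelta(days=15),
    IncidentType.DEATH: timedelta(days=10),
    IncidentType.CRITICAL_INFRASTRUCTURE: timedelta(days=2),
}


def reporting_deadline(awareness: datetime, incident_type: IncidentType) -> datetime:
    """Latest notification date, counted from when the provider became aware."""
    return awareness + REPORTING_WINDOWS[incident_type]


aware = datetime(2025, 3, 3, 9, 0)
print(reporting_deadline(aware, IncidentType.CRITICAL_INFRASTRUCTURE))
# 2025-03-05 09:00:00
```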
To mitigate AI-related cyber risks, organizations should implement strong information governance and regularly test their AI models. Maintaining control over training datasets, ensuring data integrity, and preparing for evolving cyber threats are crucial steps toward compliance. The Act also raises legal and regulatory stakes, so companies should update security policies and conduct exercises to strengthen their AI-focused cybersecurity strategies.
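One concrete way to maintain control over training data is to fingerprint it, so that later tampering (for instance, poisoning of training files) is detectable before retraining. The following is a minimal sketch using standard-library hashing; the directory and manifest paths are placeholders, and a production setup would likely store the manifest in tamper-evident storage.

```python
import hashlib
import json
from pathlib import Path


def fingerprint_dataset(root: str) -> dict[str, str]:
    """SHA-256 every file under the dataset directory so that any later
    modification of the training data can be detected."""
    return {
        str(path): hashlib.sha256(path.read_bytes()).hexdigest()
        for path in sorted(Path(root).rglob("*"))
        if path.is_file()
    }


def verify_dataset(root: str, manifest_file: str) -> list[str]:
    """Return the files whose current hash no longer matches the manifest."""
    recorded = json.loads(Path(manifest_file).read_text())
    current = fingerprint_dataset(root)
    return [f for f, h in recorded.items() if current.get(f) != h]


# Record a manifest at training time, then re-check before each retraining
# run. Path names here are placeholders.
Path("manifest.json").write_text(json.dumps(fingerprint_dataset("training_data/")))
print(verify_dataset("training_data/", "manifest.json"))  # [] means intact
```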