Protect AI, an AI cybersecurity startup, has disclosed eight vulnerabilities in the open-source supply chain used to develop in-house AI and ML models. The vulnerabilities, outlined in Protect AI’s February Vulnerability Report, include critical- and high-severity issues, each assigned a CVE identifier for tracking. They range from arbitrary file writes to remote code execution flaws, posing significant risks to AI/ML development pipelines.
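The report covers several flaw classes, and one widely known way remote code execution enters an ML pipeline is unsafe deserialization of model artifacts, since many Python serialization formats built on pickle execute code during loading. The sketch below is a generic illustration of that pattern, not a reproduction of any reported CVE; the `MaliciousModel` class is a hypothetical example.

```python
import pickle

# A "model file" crafted by an attacker. pickle calls __reduce__
# during deserialization, so merely loading the file runs the payload.
class MaliciousModel:
    def __reduce__(self):
        import os
        return (os.system, ("echo arbitrary code ran during model load",))

payload = pickle.dumps(MaliciousModel())

# Any pipeline that fetches third-party model artifacts and calls
# pickle.loads (directly or via a framework helper) executes the payload.
pickle.loads(payload)  # runs the echo command
```

This is why untrusted model weights are best treated like untrusted executables: the deserialization step itself is the attack surface, before any inference code runs.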
Traditional software bills of materials (SBOMs) are insufficient for securing the open-source code used in AI/ML development because they do not account for the complexity of AI/ML pipelines. Daryan Dehghanpisheh, co-founder of Protect AI, argues that a specialized AI/ML bill of materials (BOM) is needed to address risks unique to AI, such as data poisoning and model bias. Without one, in-house developers lack visibility into vulnerabilities across the machine learning pipeline, leaving them reliant on third-party expertise or on tools such as Protect AI’s Guardian product and huntr program.
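Dehghanpisheh’s distinction is easier to see with a concrete entry. The sketch below shows the kind of pipeline metadata an AI/ML BOM might record beyond a package list: dataset provenance, upstream model origin, and evaluation context. The field names are illustrative assumptions, not a published schema (CycloneDX’s ML-BOM work covers similar ground).

```python
import json

# Hypothetical AI/ML BOM entry; field names are illustrative, not a
# published schema. Unlike a package-only SBOM, it records the data and
# training lineage that attacks such as data poisoning would target.
ml_bom_entry = {
    "model": {"name": "fraud-classifier", "version": "2.3.0"},
    "frameworks": [  # the part a traditional SBOM already covers
        {"name": "torch", "version": "2.1.0"},
        {"name": "transformers", "version": "4.36.2"},
    ],
    "training_data": {  # provenance needed to reason about poisoning
        "source": "s3://internal/datasets/transactions-2023",
        "snapshot_sha256": "9f2c...",  # placeholder integrity pin
        "collected": "2023-11-01",
    },
    "base_model": {  # upstream artifact pulled from a public hub
        "origin": "hypothetical-hub/org/base-encoder",
        "artifact_sha256": "41ab...",  # placeholder hash
    },
    "evaluation": {"bias_audit": "2024-01-15", "status": "passed"},
}
print(json.dumps(ml_bom_entry, indent=2))
```

A plain SBOM would capture only the `frameworks` section; the rest is the pipeline visibility Dehghanpisheh says in-house developers currently lack.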
Protect AI detects vulnerabilities through two channels: its Guardian product, a secure gateway that scans AI/ML models (a generic sketch of such scanning follows below), and its huntr program, launched in August 2023, which engages a community of independent bounty hunters to find new flaws. These initiatives underscore Protect AI’s commitment to AI/ML model security, leveraging collective effort to detect and mitigate vulnerabilities. As threats to AI/ML models continue to evolve, programs like Protect AI’s are crucial to ensuring the integrity and security of AI-driven technologies.
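Protect AI has not published Guardian’s internals, but one common technique for scanning Python model artifacts is static inspection of pickle opcodes before anything is deserialized. The sketch below is a generic heuristic of that kind, not Guardian’s method; `SUSPICIOUS_MODULES` and `scan_pickle` are illustrative names.

```python
import pickle
import pickletools

# Module imports that serialized model weights should never need.
SUSPICIOUS_MODULES = {"os", "posix", "nt", "subprocess", "builtins", "sys"}

def scan_pickle(data: bytes) -> list[str]:
    """Statically flag pickle opcodes that can import modules or call objects.

    Coarse heuristic: REDUCE also appears in benign pickles, so real
    scanners pair this with allowlists of known-safe globals.
    """
    findings = []
    for opcode, arg, pos in pickletools.genops(data):
        if opcode.name == "GLOBAL":  # arg looks like "module qualname"
            module = str(arg).split()[0].split(".")[0]
            if module in SUSPICIOUS_MODULES:
                findings.append(f"offset {pos}: imports {arg!r}")
        elif opcode.name in ("STACK_GLOBAL", "REDUCE", "INST", "OBJ"):
            findings.append(f"offset {pos}: call/import opcode {opcode.name}")
    return findings

# Demo: a payload like the earlier example, dumped with protocol 0 so
# the import is emitted as a GLOBAL opcode the scanner can name.
class Malicious:
    def __reduce__(self):
        import os
        return (os.system, ("id",))

for finding in scan_pickle(pickle.dumps(Malicious(), protocol=0)):
    print(finding)
```

Because the scan never deserializes the file, it can sit in front of a model registry as a gateway check, which is the general role the article describes for Guardian.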