Over the past month, researchers on Huntr, a bug bounty platform dedicated to AI and ML, have identified multiple severe vulnerabilities in popular solutions, including MLflow, ClearML, and Hugging Face Transformers. The most critical issues are four severe vulnerabilities in MLflow, with a maximum CVSS score of 10. Among them is CVE-2023-6831, a path traversal bug in artifact deletion that enables attackers to bypass validation checks and delete arbitrary files on the server. Another critical flaw, CVE-2023-6977, is a path validation bypass that could allow attackers to read sensitive files. All four vulnerabilities, along with a high-severity server-side request forgery (SSRF) bug, were addressed in MLflow 2.9.2.
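Path traversal bugs of this kind typically arise when a server validates the raw user-supplied path instead of its fully resolved form. The sketch below is purely illustrative, not MLflow's actual validation code, and the artifact root directory is a hypothetical example; it shows the general pattern of resolving a path before checking that it stays inside the permitted directory:

```python
import os

# Hypothetical artifact root for illustration only
ARTIFACT_ROOT = "/srv/mlflow/artifacts"

def is_safe_artifact_path(user_path: str) -> bool:
    """Reject paths that escape the artifact root after normalization.

    Naive checks that merely look for a literal "../" in the input can be
    bypassed (e.g. via absolute paths or encoded separators); resolving
    the joined path first, then comparing prefixes, closes that gap.
    """
    full = os.path.realpath(os.path.join(ARTIFACT_ROOT, user_path))
    return full == ARTIFACT_ROOT or full.startswith(ARTIFACT_ROOT + os.sep)

# A traversal attempt resolves outside the root and is rejected:
print(is_safe_artifact_path("../../etc/passwd"))  # False
print(is_safe_artifact_path("run1/model.pkl"))    # True
```

The key design choice is checking the *resolved* path: `os.path.realpath` collapses `..` components (and symlinks), so the comparison operates on where the path actually points rather than on how it was written.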
Hugging Face Transformers, another popular library, was affected by a critical-severity vulnerability (CVE-2023-7018) caused by a lack of restrictions in a function that loads vocab.pkl files from a remote repository. The oversight could allow attackers to plant a malicious file and achieve remote code execution (RCE). The issue was resolved in Transformers version 4.36. Additionally, the Huntr community identified a high-severity stored cross-site scripting (XSS) flaw (CVE-2023-6778) in ClearML, an end-to-end platform for automating ML experiments. The flaw, found in the Markdown editor component, allows the injection of malicious XSS payloads that could compromise user accounts. All vulnerabilities were reported to the project maintainers 45 days before Protect AI's public disclosure.
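Loading a pickle file such as vocab.pkl from an untrusted repository is dangerous because unpickling can execute arbitrary code. A minimal sketch of the mechanism follows; it is not Transformers' actual code, and the class name and payload are purely illustrative:

```python
import pickle

class MaliciousPayload:
    """Any object whose __reduce__ returns a callable gets that callable
    invoked during unpickling."""
    def __reduce__(self):
        # A real attacker would return something like (os.system, ("...",));
        # str.upper stands in here as a harmless demonstration.
        return (str.upper, ("code ran during unpickling",))

# The attacker serializes the object into a file the victim will fetch ...
payload = pickle.dumps(MaliciousPayload())

# ... and the victim's ordinary load call runs the attacker's callable.
print(pickle.loads(payload))  # → CODE RAN DURING UNPICKLING
```

This is why Python's own documentation warns against unpickling untrusted data, and why restricting where such files may be loaded from (the fix shipped in 4.36) matters: the code execution happens during deserialization itself, before the caller ever inspects the result.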
Protect AI, a company involved in the discoveries, also noted a critical-severity command injection bug in Paddle (CVE-2024-0521) without disclosing specific details. The discoveries highlight the vigilance of the Huntr community in uncovering and responsibly disclosing vulnerabilities in AI and ML solutions, contributing to the ongoing improvement of security in these technologies.