The research reveals significant vulnerabilities in AI-as-a-service providers such as Hugging Face that expose both the platforms and their customers to privilege escalation (PrivEsc) and cross-tenant attacks. Threat actors can exploit these weaknesses to gain unauthorized access to other customers’ models and even take control of continuous integration and continuous deployment (CI/CD) pipelines. Vulnerabilities in shared infrastructure allow attackers to execute arbitrary code, potentially compromising the entire service and leaking sensitive data.
One of the key risks identified is attackers’ ability to upload malicious models in pickle format. Because pickle deserialization can execute arbitrary code, a crafted model gives an attacker code execution inside the service that runs customer-uploaded models, along with a foothold from which to compromise the wider infrastructure. Furthermore, flaws in the Amazon Elastic Kubernetes Service (EKS) configuration could be exploited to obtain elevated privileges within the cluster, facilitating lateral movement and cross-tenant access. These findings highlight the critical importance of robust security measures, such as enforcing IMDSv2 with a low hop limit, to prevent unauthorized access to node credentials and data leakage.
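To illustrate the pickle risk, here is a minimal sketch of the underlying mechanism, not code from the research itself; the class name and command are purely illustrative. Pickle lets any object define __reduce__, which tells the deserializer what callable to invoke at load time, so a "model" file can run an attacker's command the moment it is loaded:

    import os
    import pickle

    class MaliciousModel:
        def __reduce__(self):
            # Benign stand-in for an attacker payload such as a reverse shell.
            # __reduce__ tells pickle what callable to invoke on deserialization.
            return (os.system, ("echo pwned > /tmp/proof",))

    payload = pickle.dumps(MaliciousModel())

    # A victim service only has to load the "model" for the command to run.
    pickle.loads(payload)

This is why weight-only formats such as safetensors, which store tensor data without executable serialization logic, are generally preferred for sharing models.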
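On the EKS side, the following is a hedged sketch of the hardening step using boto3; the instance ID is a placeholder. Requiring IMDSv2 session tokens and capping the PUT response hop limit at 1 keeps tokens issued by the metadata service from crossing the extra network hop a container introduces, cutting pods off from the node's IAM credentials:

    import boto3

    ec2 = boto3.client("ec2")

    # Require IMDSv2 session tokens and cap the hop limit at 1 so the
    # token-issuing PUT response cannot traverse the additional network
    # hop added by a container.
    ec2.modify_instance_metadata_options(
        InstanceId="i-0123456789abcdef0",  # placeholder EKS worker node ID
        HttpTokens="required",             # enforce IMDSv2
        HttpPutResponseHopLimit=1,
    )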
The research also uncovered vulnerabilities in the Hugging Face Spaces service, where attackers could achieve remote code execution via a specially crafted Dockerfile. By exploiting this flaw, attackers could overwrite images in the internal container registry, further compromising the platform's security. Following coordinated disclosure, Hugging Face has addressed these issues, underscoring the importance of prompt mitigation and user awareness.
Additionally, the research underscores the broader risks of relying on AI models, especially when they are sourced from untrusted providers. Phenomena such as AI package hallucination, in which a model recommends a dependency that does not exist and that a threat actor can then register under the suggested name, and the jailbreaking of AI assistants show that caution must be exercised when leveraging AI technologies. The findings call for increased vigilance, multi-factor authentication, and strict adherence to security best practices to mitigate the evolving threats posed by malicious actors in the AI landscape.
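As a small illustration of guarding against package hallucination, here is a hedged Python sketch, not part of the original research, that checks whether a suggested dependency actually exists on PyPI before installation. Note that a hallucinated name an attacker has since registered would still pass this check, so treat it as a first filter rather than a guarantee:

    import sys
    import urllib.error
    import urllib.request

    def exists_on_pypi(package: str) -> bool:
        """Return True if PyPI serves metadata for the given package name."""
        url = f"https://pypi.org/pypi/{package}/json"
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.status == 200
        except urllib.error.HTTPError:
            # PyPI returns 404 for names it has never seen.
            return False

    if __name__ == "__main__":
        name = sys.argv[1]  # package name suggested by an AI assistant
        verdict = "exists" if exists_on_pypi(name) else "NOT FOUND - do not install"
        print(f"{name}: {verdict}")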