Cybersecurity researchers have uncovered a sophisticated attack, named “LLMjacking” by the Sysdig Threat Research Team, which uses stolen cloud credentials to access cloud-hosted large language models (LLMs) and resell that access. The attack begins with the breach of a system running a vulnerable version of the Laravel Framework (CVE-2021-3129, a remote code execution flaw), from which Amazon Web Services (AWS) credentials are extracted. Those credentials are then used to reach LLM services such as Anthropic’s Claude models.
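To make the credential-abuse step concrete, the sketch below is illustrative only (it is not code from the Sysdig report; the region, model ID, and credential values are placeholders). It shows how a set of AWS keys can be tested against a Claude model on Amazon Bedrock with boto3, the kind of check that tells an attacker whether stolen keys carry LLM permissions:

```python
import json

import boto3
from botocore.exceptions import ClientError

# Hypothetical stolen credentials -- placeholder values for illustration only.
session = boto3.Session(
    aws_access_key_id="AKIA...",       # assumed value
    aws_secret_access_key="...",       # assumed value
    region_name="us-east-1",           # assumed region
)
bedrock = session.client("bedrock-runtime")

try:
    # A single minimal InvokeModel call reveals whether the keys
    # are permitted to invoke the targeted Claude model.
    bedrock.invoke_model(
        modelId="anthropic.claude-v2",  # assumed model ID
        body=json.dumps({
            "prompt": "\n\nHuman: hi\n\nAssistant:",
            "max_tokens_to_sample": 1,
        }),
    )
    print("Credentials can invoke the model.")
except ClientError as err:
    print("Invocation denied or failed:", err.response["Error"]["Code"])
```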
The attackers use an open-source Python script to validate keys for various cloud AI offerings, including AWS Bedrock, Google Cloud Vertex AI, Mistral, and OpenAI. The validation phase relies on deliberately lightweight queries that confirm whether credentials are usable and what capabilities and quotas they carry, without generating conspicuous usage. The attackers then employ a reverse proxy tool called oai-reverse-proxy to broker access to the compromised accounts without exposing the underlying credentials, making the stolen access straightforward to monetize.
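The key-checking idea itself is conceptually simple. The fragment below is a minimal sketch of the concept, not the actual open-source script the attackers used; the endpoints shown are the public model-listing routes for OpenAI and Mistral, and the keys are placeholders. A cheap, read-only request per provider reveals whether a key is live without burning noticeable quota:

```python
import requests

# Placeholder keys -- assumptions for illustration.
CANDIDATE_KEYS = {
    "openai": "sk-...",
    "mistral": "...",
}

# Read-only endpoints: listing models confirms a key authenticates
# without generating any completion traffic on the victim's bill.
CHECK_ENDPOINTS = {
    "openai": "https://api.openai.com/v1/models",
    "mistral": "https://api.mistral.ai/v1/models",
}

def check_key(provider: str, key: str) -> bool:
    """Return True if the key authenticates against the provider."""
    resp = requests.get(
        CHECK_ENDPOINTS[provider],
        headers={"Authorization": f"Bearer {key}"},
        timeout=10,
    )
    return resp.status_code == 200

for provider, key in CANDIDATE_KEYS.items():
    status = "valid" if check_key(provider, key) else "invalid"
    print(f"{provider}: {status}")
```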
This method marks a significant shift from traditional prompt injection and model poisoning attacks: rather than manipulating a model, the attackers monetize access to it while the compromised cloud account owner bears the cost, which can exceed $46,000 per day. Moreover, by exhausting quota limits, attackers can block legitimate use of the models, disrupting business operations.
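The scale of that figure becomes intuitive with back-of-the-envelope arithmetic. The sketch below uses assumed round numbers (the quota, tokens per request, and blended per-token price are all illustrative, not the report’s exact inputs) to show how quota-maxed usage reaches that order of magnitude:

```python
# Rough estimate of the victim's daily bill when an attacker drives a
# model at its full quota. All figures are illustrative assumptions.
REQUESTS_PER_MINUTE = 100      # assumed provisioned quota
TOKENS_PER_REQUEST = 10_000    # assumed input + output tokens per call
PRICE_PER_1K_TOKENS = 0.032    # assumed blended price in USD

MINUTES_PER_DAY = 60 * 24

daily_tokens = REQUESTS_PER_MINUTE * MINUTES_PER_DAY * TOKENS_PER_REQUEST
daily_cost = daily_tokens / 1_000 * PRICE_PER_1K_TOKENS
print(f"Estimated daily cost: ${daily_cost:,.0f}")  # -> $46,080
```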
Organizations are strongly advised to enable detailed logging and continuously monitor cloud logs for suspicious or unauthorized activity, such as unexpected model invocations. Equally important is a robust vulnerability management process to prevent the initial breach. By taking these steps, organizations can better protect themselves against the financial and operational impacts of LLMjacking.
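On the detection side, one starting point is scanning CloudTrail for Bedrock API activity, which should be rare or absent in accounts that do not normally use LLM services. The following is a minimal sketch of that idea; the region and one-day lookback window are assumptions, and a production detection would also alert on anomalous caller identities and source IPs:

```python
from datetime import datetime, timedelta, timezone

import boto3

# Scan the last 24 hours of CloudTrail events for Bedrock API calls.
cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")  # assumed region

start = datetime.now(timezone.utc) - timedelta(days=1)
paginator = cloudtrail.get_paginator("lookup_events")
pages = paginator.paginate(
    LookupAttributes=[
        {"AttributeKey": "EventSource", "AttributeValue": "bedrock.amazonaws.com"}
    ],
    StartTime=start,
)

for page in pages:
    for event in page["Events"]:
        # Each hit names the API action and caller, enough to spot
        # credentials being used in unexpected ways or from odd places.
        print(event["EventTime"], event["EventName"], event.get("Username"))
```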