Microsoft has initiated legal action against a foreign-based threat actor group for operating a hacking-as-a-service infrastructure targeting its generative AI services. The group used sophisticated software to exploit exposed customer credentials, which were scraped from public websites. By unlawfully accessing accounts linked to Microsoft’s Azure OpenAI Service, they altered the capabilities of the AI services to produce offensive and harmful content. The group monetized their access by selling it to other malicious actors, who were provided with instructions on how to use the compromised services to generate dangerous content. This activity was discovered by Microsoft’s Digital Crimes Unit (DCU) in July 2024.
The hackers used a variety of tools to facilitate their illegal activities, including a reverse proxy service called “oai reverse proxy,” which routed API calls to Microsoft’s Azure OpenAI Service using stolen API keys. They also built custom applications such as “de3u,” a frontend that let users access the service and generate harmful images through the DALL-E model. The stolen API keys and other authentication details used to sign these unauthorized calls were systematically harvested from a range of customers, including U.S.-based companies.
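To see why leaked keys are so valuable, it helps to know that Azure OpenAI authenticates REST calls with a single `api-key` request header: anyone holding the key can issue requests against that resource’s endpoint. The sketch below assembles such a request without sending it; the endpoint, deployment name, key, and `api-version` value are placeholders for illustration, not values from the case.

```python
# Illustrative sketch: how an Azure OpenAI image-generation request is
# authenticated. The "api-key" header is the only secret involved, which
# is why scraped or stolen keys are enough to grant access.
# All concrete values below (endpoint, deployment, key, api-version)
# are hypothetical placeholders.

def build_image_request(endpoint: str, deployment: str, api_key: str, prompt: str):
    """Assemble (url, headers, body) for an Azure OpenAI image-generation call."""
    url = (
        f"{endpoint}/openai/deployments/{deployment}"
        "/images/generations?api-version=2024-02-01"
    )
    headers = {
        "api-key": api_key,  # the sole credential; a stolen key slots in here
        "Content-Type": "application/json",
    }
    body = {"prompt": prompt, "n": 1}
    return url, headers, body

url, headers, body = build_image_request(
    "https://example-resource.openai.azure.com",  # placeholder endpoint
    "dall-e-3",
    "LEAKED-API-KEY",  # placeholder for a scraped credential
    "a landscape",
)
print(headers["api-key"])
```

Because the key travels as an opaque header value, a reverse proxy like the one described above can simply forward requests and inject whichever stolen key it chooses, with no further authentication step for Microsoft’s service to distinguish.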
Microsoft’s investigation revealed that the hackers took steps to conceal their actions by attempting to delete certain pages and repositories linked to their operation. This included removing GitHub repositories and infrastructure related to the reverse proxy tool after the seizure of the domain “aitism.net,” which was central to their operation. Despite these attempts to cover their tracks, Microsoft identified the perpetrators, obtained a court order to seize the domain, and implemented measures to prevent further illegal access to its AI services.
The attack highlights a growing trend of cybercriminals exploiting AI tools for malicious purposes, with Microsoft noting that similar attacks have been observed in the past. Microsoft’s legal filing emphasized that the threat actors targeted not just the Azure OpenAI Service, but other AI service providers as well, suggesting a coordinated effort to breach multiple platforms. This incident underscores the challenges of securing AI services and the ongoing threats from malicious actors who use stolen cloud credentials to abuse AI infrastructure.