In 2024, the cybersecurity landscape saw a significant rise in AI-related threats. Malicious actors increasingly targeted large language models (LLMs) like ChatGPT, Copilot, and Gemini. According to KELA’s annual “State of Cybercrime” report, discussions around exploiting these models surged by 94% compared to the previous year. These developments highlight a growing trend of cybercriminals leveraging advanced AI tools for malicious purposes.
Cybercriminals are developing and sharing new jailbreaking techniques on underground forums, such as HackForums and XSS.
These techniques aim to bypass the safety limitations of LLMs and create malicious content like phishing emails and malware code. One of the most effective jailbreaking methods involves word transformation, which successfully bypasses 27% of safety tests. This technique replaces sensitive words with synonyms or splits them into substrings to evade detection, making it a key concern for cybersecurity experts.
The report also revealed a dramatic rise in the number of compromised accounts for popular LLM platforms.
ChatGPT saw an alarming increase in compromised accounts, from 154,000 in 2023 to 3 million in 2024, a growth of nearly 1,850%. Similarly, Gemini experienced a surge from 12,000 to 174,000 compromised accounts, a 1,350% increase. The stolen credentials, obtained via infostealer malware, can be used to further exploit LLMs and associated services, posing a severe threat to cybersecurity.
KELA’s report highlights emerging threats such as prompt injection and agentic AI. Prompt injection is identified as a critical threat against generative AI applications, while agentic AI presents a new attack vector with its autonomous decision-making abilities. To mitigate these risks, the report urges organizations to implement robust security measures, including secure LLM integrations and advanced deepfake detection technologies. As AI-powered cyber threats evolve, proactive threat intelligence and adaptive defense strategies will be crucial in maintaining security.
Reference: