The sale and discussion of large language models (LLMs) like WormGPT on underground forums have raised alarm about their potential for creating mutating malware. However, cybercriminals remain noticeably hesitant to adopt AI fully for malicious purposes, in contrast with the cryptocurrency topics that dominate these spaces. While some exploit forums express interest in future applications of AI, others concentrate on practical experimentation, even while acknowledging the technology's current limitations.
The dual-use nature of AI-driven tools such as WormGPT has heightened these concerns, prompting discussion of potential illicit applications, including remote access Trojans (RATs), keyloggers, and infostealers. What stands out from these observations is a dichotomy within the cybercriminal community: skilled actors anticipate sophisticated future applications, while less skilled individuals pursue immediate but limited uses, often exposing operational security lapses in their discussions.