Researchers from SlashNext have raised concerns about WormGPT, a new generative AI cybercrime tool that poses a significant risk by enabling cybercriminals to launch sophisticated attacks.
This tool, advertised on underground forums, is pitched as a blackhat alternative to GPT models, allowing threat actors to automate the creation of highly convincing malicious emails for phishing campaigns and business email compromise (BEC) attacks. Unlike ChatGPT, WormGPT has no ethical boundaries or limitations, giving criminals a broad range of capabilities for illegal activities.
Generative AI, a subset of artificial intelligence, focuses on producing new data such as text, video, and images. WormGPT, reportedly based on the open-source GPT-J language model, offers unlimited character support, chat memory retention, and code formatting, making it all the more potent for crafting malicious content.
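To make the underlying technology concrete, the minimal sketch below shows how the publicly available GPT-J checkpoint generates new text using the Hugging Face transformers library. The prompt and generation settings are illustrative, and this is the open model with its normal behavior, not WormGPT itself:

```python
# A minimal sketch of text generation with the open-source GPT-J model,
# the same family WormGPT is reportedly built on. Requires the
# transformers library and substantial memory for the 6B-parameter model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/gpt-j-6B"  # public GPT-J checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# An everyday, benign prompt; the model continues it token by token.
prompt = "Write a short, friendly reminder email about tomorrow's meeting."
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(
    **inputs,
    max_new_tokens=80,  # length of the generated continuation
    do_sample=True,     # sample rather than pick the most likely token
    temperature=0.8,    # higher values produce more varied text
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same generative mechanism that drafts a harmless reminder here is what, stripped of safeguards, lets a tool like WormGPT mass-produce convincing phishing copy.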
Worryingly, underground forums also offer “jailbreaks” for popular chatbot interfaces like ChatGPT: specially crafted prompts that manipulate these models into bypassing their protective measures, disclosing sensitive information, or producing harmful code.
Recent analyses have also highlighted that Bard, a generative AI tool comparable to ChatGPT, exhibits significantly weaker anti-abuse restrictions, making it easier for threat actors to generate malicious content, create phishing emails, develop keylogger malware, and even design basic ransomware.
WormGPT’s authors claim the model was trained on a diverse array of data sources, with a particular focus on malware-related data, further amplifying its potential threat.
In conclusion, WormGPT’s emergence underscores the grave risk that generative AI technologies pose in the hands of cybercriminals, who can use them to craft sophisticated attacks with precision.
To mitigate AI-driven BEC attacks, experts recommend implementing BEC-specific awareness training and enhancing email verification measures, for example by flagging messages that fail authentication checks or contain language typical of BEC attempts, as sketched below.
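As a rough illustration of what such verification measures might look like in practice, here is a minimal Python sketch that combines standard email authentication results (per RFC 8601 Authentication-Results headers) with simple content heuristics. The keyword list, the example.com domain, and the flag_suspicious helper are all hypothetical; a production filter would be considerably more sophisticated:

```python
# A minimal sketch, not a production filter: flag inbound mail that fails
# authentication checks or shows common BEC impersonation patterns.
import email
from email import policy

# Illustrative keyword list; real systems use far richer signals.
BEC_KEYWORDS = {"wire transfer", "urgent payment", "gift card",
                "change of bank details"}

def flag_suspicious(raw_message: bytes) -> list[str]:
    msg = email.message_from_bytes(raw_message, policy=policy.default)
    reasons = []

    # 1. Check the Authentication-Results header added by the receiving
    #    mail server for SPF, DKIM, and DMARC outcomes.
    auth = (msg.get("Authentication-Results") or "").lower()
    for check in ("spf", "dkim", "dmarc"):
        if f"{check}=pass" not in auth:
            reasons.append(f"{check} did not pass")

    # 2. Flag a display name that suggests an internal sender while the
    #    actual address is external (a common BEC impersonation trick).
    #    "example.com" stands in for the organization's real domain.
    from_header = msg.get("From", "")
    if "<" in from_header:
        display, _, addr = from_header.partition("<")
        addr = addr.strip().rstrip(">")
        if "example.com" in display.lower() and not addr.endswith("example.com"):
            reasons.append("display name impersonates internal domain")

    # 3. Simple keyword heuristics on the plain-text body.
    body = msg.get_body(preferencelist=("plain",))
    text = body.get_content().lower() if body else ""
    for kw in BEC_KEYWORDS:
        if kw in text:
            reasons.append(f"BEC keyword: {kw!r}")
    return reasons
```

Heuristics like these will not stop a determined attacker on their own, which is why experts pair them with the user-facing BEC training mentioned above.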