In a concerning development, threat actors have capitalized on the popularity of ChatGPT to launch a copycat AI tool known as “FraudGPT,” built explicitly to facilitate malicious activity.
Researchers have discovered advertisements for FraudGPT on the Dark Web, where the tool is sold on a subscription basis via Telegram. With more than 3,000 confirmed sales and reviews, FraudGPT lets cybercriminals use AI to craft sophisticated phishing campaigns and carry out a range of other nefarious activities.
FraudGPT and its counterpart, WormGPT, reflect a growing trend of adversaries using generative AI to sidestep ethical safeguards and scale up their cyber operations. The criminal actors behind FraudGPT claim to be verified vendors on underground Dark Web marketplaces, including Empire, WHM, Torrez, World, AlphaBay, and Versus, a claim that lends the tool credibility and places it within reach of a broader range of cybercriminals.
These AI-driven tools are reportedly effective at creating hard-to-detect malware, finding vulnerabilities, and launching multi-layered phishing attacks, posing an escalating threat to organizations worldwide.
As cyber threats evolve, the adoption of generative AI tools by malicious actors poses new challenges for defenders. Beyond cybercrime, fake social media accounts paired with AI-generated profile images are further complicating the identification and mitigation of disinformation campaigns.
As China’s influence operations and disinformation tactics extend beyond its borders, security experts stress that information sharing and collaboration between the security community and law enforcement agencies are essential to counter such threats effectively.
The increasing prevalence of AI-enabled tools in cyberattacks calls for a defense-in-depth strategy, backed by advanced security analytics, to stay ahead of adversaries.
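To make the defense-in-depth idea concrete, the sketch below shows one way layered email analytics might combine several independent, weak signals (authentication failures, newly registered sender domains, urgency language of the kind generative AI excels at producing) into a single risk score, so that evading any one check is not enough. It is a minimal, hypothetical illustration in Python; the field names, weights, and thresholds are assumptions, not any vendor's actual detection logic.

```python
# Illustrative sketch only: a layered phishing-risk score that combines
# several independent weak signals, so defeating one check is not enough.
# All field names, weights, and thresholds here are hypothetical.

from dataclasses import dataclass

@dataclass
class InboundEmail:
    sender_domain: str
    spf_pass: bool          # SPF result, as reported by the mail gateway
    dkim_pass: bool         # DKIM result
    domain_age_days: int    # from a WHOIS/registration lookup
    body: str

# Social-engineering phrases; a real deployment would use a richer model.
URGENCY_PHRASES = ("act now", "verify your account", "payment overdue",
                   "password expires", "immediately")

def phishing_risk(mail: InboundEmail) -> float:
    """Return a 0.0-1.0 risk score built from independent weak signals."""
    score = 0.0
    if not mail.spf_pass:
        score += 0.3        # failed sender authentication
    if not mail.dkim_pass:
        score += 0.2
    if mail.domain_age_days < 30:
        score += 0.3        # freshly registered sender domain
    body = mail.body.lower()
    if any(phrase in body for phrase in URGENCY_PHRASES):
        score += 0.2        # urgency language typical of phishing lures
    return min(score, 1.0)

if __name__ == "__main__":
    suspicious = InboundEmail(
        sender_domain="support-billing.example",
        spf_pass=False,
        dkim_pass=True,
        domain_age_days=4,
        body="Your payment is overdue. Verify your account immediately.",
    )
    # A pipeline might quarantine anything above a chosen cutoff, e.g. 0.5.
    print(f"risk = {phishing_risk(suspicious):.2f}")
```

Fluent, typo-free prose is precisely what tools like FraudGPT automate, which is why a layered score like this deliberately leans on signals, such as authentication results and domain age, that polished wording cannot forge.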