Hackers have begun leveraging the DeepSeek and Qwen AI models to create advanced malware, taking advantage of their powerful language-generation capabilities. These newer models have attracted cybercriminal attention because they carry far fewer restrictions than more established services such as ChatGPT. As a result, even low-skilled attackers can bypass their basic safeguards and generate harmful content using readily available scripts and tools, and the absence of robust anti-abuse mechanisms leaves the models especially open to misuse.
Cybercriminals are also using jailbreaking prompts to manipulate the models into producing uncensored or unrestricted output, bypassing their built-in limitations to craft malicious content. One such method, the “Do Anything Now” (DAN) prompt, instructs the model to override its previous instructions; in other observed exchanges, attackers steered the models toward ways of bypassing the security systems of banking applications. These manipulations have already produced dangerous tools, including infostealers designed to harvest sensitive user information such as login credentials and financial details.
Researchers at Check Point found that the models are being used to craft scripts capable of bypassing fraud-protection systems, a significant threat to financial institutions. Such tools can extract personal data from unsuspecting users while evading traditional security measures, giving attackers the upper hand in stealing valuable information. In one instance, attackers used DeepSeek to discuss ways to circumvent anti-fraud protections in banking systems, demonstrating how these models can facilitate financial theft and underscoring the growing risk to any industry that depends on secure online transactions.
Beyond stealing sensitive data, cybercriminals are using models like Qwen and DeepSeek to optimize spam distribution, automating the delivery of malicious emails and messages so they can reach far more victims with less effort. The widespread misuse of AI in cybercrime underscores the need to strengthen security defenses: organizations must develop proactive measures to detect and mitigate the evolving threats posed by these AI-powered attacks.