A North Korean hacking group known as Kimsuky has been caught using AI to create counterfeit South Korean military ID cards. According to the cybersecurity firm Genians, these AI-generated images were a key component of a spear-phishing campaign designed to deceive targets. Impersonating a defense-related institution, the attackers emailed victims claiming to be involved in the ID issuance process for military officials. The fake IDs lent the phishing emails an air of credibility and increased the likelihood that recipients would click on a malicious link.
The campaign was first spotted by the Genians Security Center on July 17 and represented an evolution in the group’s tactics, following a series of similar phishing attacks by Kimsuky in June that deployed the same type of malware. The attackers specifically targeted researchers in North Korean studies, human rights activists, and journalists, suggesting a strategic focus on gathering intelligence on, or disrupting the work of, people whose work concerns North Korea. The use of a consistent malware payload across campaigns points to a sophisticated and persistent threat actor.
The fake IDs were attached to the phishing emails as PNG files and were designed to closely mimic genuine military ID cards. Genians researchers determined, with a reported 98% probability, that the images were deepfakes generated by artificial intelligence. When a victim downloaded the attached image, an accompanying file, ‘LhUdPC3G.bat,’ was installed alongside it. This batch file was the crux of the attack: designed to run once downloaded, it launched a malicious program that enabled internal data theft and remote control of the compromised system.
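To make the delivery pattern concrete: the warning sign in this campaign is an email that pairs an innocuous-looking image with a script file. Below is a minimal, illustrative sketch of how a mail filter might flag that combination using Python’s standard email library. This is not Genians’ tooling, and the extension lists and function name are assumptions chosen for the example.

```python
# Hypothetical attachment screen: flags messages that pair an image
# attachment with a script/executable, the delivery pattern described above.
# The extension lists and policy below are illustrative assumptions.
from email import policy
from email.parser import BytesParser

IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".gif"}
EXEC_EXTS = {".bat", ".cmd", ".exe", ".scr", ".lnk", ".vbs", ".js", ".ps1"}

def suspicious_attachment_pair(raw_message: bytes) -> bool:
    """Return True if the message carries both an image and a script/executable."""
    msg = BytesParser(policy=policy.default).parsebytes(raw_message)
    exts = set()
    for part in msg.iter_attachments():
        name = (part.get_filename() or "").lower()
        dot = name.rfind(".")
        if dot != -1:
            exts.add(name[dot:])
    # An image plus a batch/executable file in one email is a classic lure pattern.
    return bool(exts & IMAGE_EXTS) and bool(exts & EXEC_EXTS)
```

A real gateway would layer this with sender-reputation checks and sandbox detonation; the point of the sketch is simply that the PNG-plus-batch-file pairing described in the Genians report is mechanically easy to detect.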
The report also highlighted how the attackers likely bypassed the safety protocols of the large language model (LLM) used to create the fake IDs. Typically, AI services like ChatGPT refuse to generate illegal content such as copies of government IDs. However, the hackers used a technique known as prompt injection to circumvent these restrictions. They may have framed their request in a way that made the AI believe it was creating a legitimate mock-up or a sample design, rather than a fraudulent copy. This method demonstrates how readily available AI tools can be manipulated for malicious purposes.
The successful use of AI in this attack is a stark warning about the growing threat of deepfake technology. The Genians researchers noted that creating counterfeit IDs with AI is “technically straightforward” and therefore warrants a higher degree of caution. The incident marks a new frontier in cybercrime, with attackers leveraging widely available AI tools to craft more convincing and sophisticated phishing scams. The ease with which such images can be generated poses a serious challenge for cybersecurity professionals and end users alike, underscoring the need for stronger digital literacy and security measures.