OpenAI has taken down ChatGPT accounts linked to state-backed hacking operations and disinformation campaigns. The company’s latest report documents activity by accounts connected to China, Russia, North Korea, Iran, and the Philippines. OpenAI’s researchers grouped the illicit uses of ChatGPT into three broad categories: social media comment generation, malware refinement for cyberattacks, and large-scale foreign employment scams. Four of the ten use cases detailed in the report were attributed to malicious actors based in China.
The company said it banned dozens of accounts that used ChatGPT to bulk-generate social media posts. Many of the China-based accounts prompted the model in Chinese and requested responses in English on divisive topics, then produced comments in English, Chinese, and Urdu that were posted on platforms including TikTok, X, and Facebook. Russian accounts were observed generating German-language content about this year’s German federal elections and criticizing NATO. OpenAI found similar comment-generation campaigns run by threat actors in Iran, covering a range of geopolitical topics.
OpenAI attributed some accounts on its platform to known Chinese nation-state hacking groups, including APT5 and APT15. These accounts generated content related to brute-forcing passwords and sought direct assistance with writing malicious attack scripts. Separately, Russian-speaking hackers used ChatGPT accounts to develop and refine Windows malware. These actors operated stealthily, signing up with temporary email addresses and limiting each account to a single conversation. OpenAI dubbed the malware “ScopeCreep” and said it was designed to infect the devices of video game players.
As part of North Korea’s well-documented IT worker scheme, actors used ChatGPT extensively to generate fake résumés and detailed personas, which they then used to apply for remote IT jobs, OpenAI said. Accounts allegedly tied to North Korea were banned after being caught using the service to perform work tasks. OpenAI also found accounts based in Cambodia focused on generating short recruitment-style messages in multiple languages. These messages typically advertised high salaries for trivial tasks, such as liking social media posts, a classic task-scam lure.