OpenAI recently banned several accounts that were using its ChatGPT tool to develop a suspected artificial intelligence-powered surveillance tool. The tool, believed to have originated in China, was reportedly powered by Meta’s Llama model and was used to collect and analyze real-time data on anti-China protests in Western countries. The banned accounts generated detailed descriptions, analyzed documents, and built an apparatus capable of monitoring posts and comments from social media platforms such as X, Facebook, YouTube, Instagram, Telegram, and Reddit, with the aim of sharing the collected insights with Chinese authorities. The campaign was dubbed “Peer Review,” a reference to the actors’ behavior of promoting and reviewing their surveillance tooling through the AI.
In one case, the banned actors used ChatGPT to modify and debug the source code of the monitoring software, known as the “Qianyue Overseas Public Opinion AI Assistant.” Their activities also included research into U.S. think tanks, as well as into politicians and government officials in countries such as Australia, Cambodia, and the United States. The AI was used to read, translate, and analyze images and screenshots of documents related to Uyghur rights protests in Western cities. Although the authenticity of the images remains uncertain, they are believed to have been obtained from social media.
Besides the surveillance tool, OpenAI disrupted several other clusters found abusing ChatGPT for various malicious activities.
These included a deceptive employment scheme run by a North Korea-linked network, which created fraudulent job applications, resumes, and profiles to deceive employers. Another malicious cluster generated anti-U.S. content for publication in Latin American media. OpenAI also identified networks behind romance-baiting scams, social media manipulation, and influence operations, including pro-Palestinian and anti-Israel content linked to Iranian influence networks. North Korean threat actors were additionally found using ChatGPT for activities related to cyber intrusion tools and cryptocurrency research.
The increasing use of AI tools by malicious actors underscores growing concern about cyber-enabled disinformation campaigns and other harmful operations. OpenAI, along with other AI companies, has emphasized the importance of sharing threat insights with upstream providers, software developers, and downstream platforms to improve detection and enforcement. Such collaboration is crucial for addressing the evolving role of AI in cyber threats and for preventing its misuse in spreading disinformation and manipulating online content.