Meta, the parent company of Facebook, has warned that threat actors are exploiting the popularity of generative AI tools like ChatGPT to deliver malware. Hackers are attempting to trick victims into installing malicious apps and browser extensions by posing as ChatGPT or similar AI tools.
Since March, Meta’s security analysts have identified around ten malware families impersonating ChatGPT and similar tools in attempts to compromise accounts across the internet.
Meta’s Q1 2023 security report noted that some of these malicious extensions even included working ChatGPT functionality alongside the malware, likely to avoid raising suspicion among app stores and users. The company has detected and blocked more than 1,000 unique malicious URLs from being shared across its apps, and it has shared its findings with industry peers and the cyber defense community.
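To give a sense of the kind of URL filtering this involves, below is a minimal Python sketch of checking shared links against a domain blocklist. It is purely illustrative: the BLOCKED_DOMAINS entries and the is_blocked helper are hypothetical, and a production pipeline like Meta's would rely on far larger, continuously updated threat feeds rather than a static set.

```python
from urllib.parse import urlparse

# Hypothetical blocklist of domains known to impersonate AI tools.
# A real platform would match against continuously updated threat feeds.
BLOCKED_DOMAINS = {
    "chatgpt-free-download.example",
    "openai-desktop.example",
}

def is_blocked(url: str) -> bool:
    """Return True if the URL's host matches, or is a subdomain of,
    any domain on the blocklist."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

# A link shared in a post would be checked before delivery.
print(is_blocked("https://chatgpt-free-download.example/install"))  # True
print(is_blocked("https://chat.openai.com/"))                       # False
```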
Furthermore, the rapid evolution of the generative AI space is attracting threat actors, which is why Meta recommends remaining vigilant about the changing threat landscape. Meta’s Chief Information Security Officer, Guy Rosen, put it bluntly: “ChatGPT is the new crypto,” likening attackers’ pivot to AI-themed lures to the earlier wave of cryptocurrency scams. The company’s research teams are also working to adopt generative AI to detect and block online influence campaigns.
At the same time, Meta’s warning highlights the growing risks that accompany new technologies such as generative AI. As adoption of these tools continues to grow, companies must remain vigilant and invest in cybersecurity measures to protect themselves and their customers from emerging threats.
Turning generative AI to defensive ends in this way, detecting and blocking influence operations rather than enabling them, is a promising development in the fight against cybercrime.