The UK’s National Cyber Security Centre (NCSC) has issued a warning about the potential cyber risks associated with large language models (LLMs) such as OpenAI’s ChatGPT. The agency advised caution when integrating LLMs into services or businesses due to the evolving nature of AI chatbots and the community’s incomplete understanding of their capabilities and vulnerabilities.
Furthermore, the NCSC highlighted the challenge of prompt injection attacks, wherein attackers manipulate LLM outputs for scams and cyber-attacks, exploiting the difficulty these models have in distinguishing instructions from data.
A significant risk identified in the NCSC’s blog is the potential for attackers to reprogram chatbots to perform unauthorized actions, including financial transactions. The agency acknowledged ongoing research to mitigate such attacks, but emphasized the absence of surefire solutions.
Organizations utilizing LLM APIs were advised to consider the possibility of models changing behind the API and the potential disruption to integrations. The NCSC’s blog concluded by underlining the exciting yet cautious nature of the emergence of LLMs, likening their usage to beta products and urging organizations to prioritize security.
Oseloka Obiora, CTO at RiverSafe, commented on the NCSC’s warning, highlighting the susceptibility of chatbots to manipulation and the consequent risks of fraud, illegal transactions, and data breaches.
Obiora cautioned senior executives to thoroughly assess the benefits and risks of AI trends and to implement necessary cybersecurity measures to safeguard organizations against potential harm.