Salt Security’s analysis of ChatGPT plugins has uncovered vulnerabilities with potentially severe consequences, including data compromise and account takeover on third-party websites. These plugins, designed to supply up-to-date information and connect ChatGPT to external services such as GitHub and Google Drive, require permissions that attackers could exploit. The identified vulnerabilities, ranging from OAuth authentication flaws to zero-click exploits, highlight the risks of AI-driven integrations and the need for robust security measures.
The first vulnerability, affecting ChatGPT itself, lies in its OAuth authentication flow: because ChatGPT did not confirm that a user had actually initiated a plugin installation, an attacker could install a malicious plugin, backed by the attacker’s own credentials, on a victim’s account. Sensitive data the victim subsequently transmitted through ChatGPT could then be intercepted, posing a significant threat to user privacy and security. Moreover, flaws in specific plugins such as AskTheCode and Charts by Kesem AI demonstrate the potential for zero-click exploits and account takeovers, further compounding the security risks in ChatGPT’s plugin ecosystem.
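Installation flows that skip user confirmation are a classic instance of OAuth CSRF, conventionally prevented by binding a random `state` value to the initiating session and verifying it on the callback. The sketch below illustrates that general defense; the function and store names are illustrative assumptions, not drawn from ChatGPT’s or Salt Security’s actual code.

```python
import secrets

# Hypothetical in-memory session store; purely for illustration.
sessions: dict[str, dict] = {}

def begin_oauth_flow(session_id: str) -> str:
    """Start an authorization request, binding a random `state` value
    to the user's session so the callback can later be verified."""
    state = secrets.token_urlsafe(32)
    sessions.setdefault(session_id, {})["oauth_state"] = state
    # The provider echoes `state` back in its redirect.
    return f"https://provider.example/authorize?client_id=app&state={state}"

def handle_callback(session_id: str, returned_state: str, code: str) -> bool:
    """Reject the callback unless `state` matches the value this user
    started with -- the check whose absence lets an attacker plant a
    plugin (and its credentials) on a victim's account."""
    expected = sessions.get(session_id, {}).pop("oauth_state", None)
    if expected is None or not secrets.compare_digest(expected, returned_state):
        return False  # forged or replayed callback: do not install
    # Only now would `code` be exchanged for tokens and the install completed.
    return True
```

Because the victim’s session never holds the attacker’s `state`, a link carrying attacker-supplied credentials fails the check and the installation is refused.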
Although the vendors promptly patched the vulnerabilities following their discovery in the summer of 2023, continued scrutiny reveals broader security concerns in AI-driven interactions. ChatGPT plugins were the primary means of extending the chatbot’s functionality until November 2023, when OpenAI introduced customizable GPTs for paying customers, and the transition underscores the need for a more secure approach to integrating AI with third-party services. However, Salt Security’s findings extend to GPTs as well, indicating a pervasive need for comprehensive security measures to mitigate the risks associated with AI technologies.