OpenAI has launched a bug bounty program that will enable security researchers to find vulnerabilities in its products and report them via the Bugcrowd crowdsourced security platform. The rewards range from $200 to $20,000, depending on the severity and impact of the reported issues.
The AI research firm has asked researchers to report model safety issues using a separate form rather than the bug bounty program, as such issues may require substantial research and a broader approach to address. Jailbreaks and safety bypasses that ChatGPT users have been exploiting to trick the chatbot are out of scope.
Last month, OpenAI disclosed a ChatGPT payment data leak caused by a bug in the open-source Redis client library used by its platform.
ChatGPT Plus subscribers started seeing other users’ email addresses on their subscription pages, and the bug exposed chat queries and personal information belonging to roughly 1.2% of Plus subscribers. The exposed data included subscriber names, email addresses, payment addresses, and partial credit card information.
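The bug reportedly involved cancelled asynchronous requests leaving a shared Redis connection out of sync, so a later request could read a response cached for another user. The sketch below is a minimal, hypothetical illustration of that general failure mode using a toy in-memory "connection"; the class and key names are invented for illustration and this is not redis-py or OpenAI's actual code.

```python
import asyncio

class PipelinedConnection:
    """Toy stand-in for a shared, pipelined cache connection.

    Requests and responses are matched purely by FIFO order, so an
    unconsumed response desynchronizes every later caller.
    """

    def __init__(self):
        self._responses = asyncio.Queue()

    async def request(self, key: str) -> str:
        # The "server" immediately queues data tied to the requested key.
        await self._responses.put(f"data-for-{key}")
        await asyncio.sleep(0.01)           # simulated network latency
        return await self._responses.get()  # read the next response in FIFO order

async def main():
    conn = PipelinedConnection()

    # User A's request is cancelled after it was sent but before the
    # response was read, leaving that response sitting in the buffer.
    task_a = asyncio.create_task(conn.request("user-A-billing"))
    await asyncio.sleep(0)   # let the request go out on the wire
    task_a.cancel()
    try:
        await task_a
    except asyncio.CancelledError:
        pass

    # User B reuses the same connection and receives user A's data.
    print(await conn.request("user-B-billing"))  # prints: data-for-user-A-billing

asyncio.run(main())
```

In this simplified model, the fix would be to discard or resynchronize a connection whose caller was cancelled mid-request rather than returning it to the pool with an unread response.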
Had OpenAI’s new bug bounty program been in place, researchers might have discovered the issue earlier and the data leak could have been avoided.
OpenAI hopes the bug bounty program will encourage researchers to report vulnerabilities, bugs, and security flaws they discover in its systems, and says it will recognize and reward their contributions to keeping the company’s technology and assets secure.
In scope are the OpenAI API and the ChatGPT chatbot; as noted above, model safety issues and the safety bypasses users have been exploiting remain excluded.