Jacob Krut, a security engineer and bug bounty hunter at Open Security, discovered a significant vulnerability while developing a custom GPT—a personalized version of ChatGPT designed for a specific function or knowledge area. The flaw was located within the ‘Actions’ section, where users define how their custom GPT interacts with external services using APIs. This feature used user-supplied URLs that were not properly validated, creating an opening for a Server-Side Request Forgery (SSRF) attack.
In a Server-Side Request Forgery (SSRF) attack, specially crafted URLs force a server to make unauthorized requests on the attacker’s behalf, reaching internal network resources that would normally be blocked from the outside. The vector is especially dangerous because a successful attack can often pivot into sensitive internal services, and security experts emphasize the wide blast radius a single server-side request can have. This is why SSRF has appeared on the OWASP Top 10 list of critical web application security risks since the 2021 edition.
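The class of validation gap described above can be sketched in a few lines. The function below is an illustrative defense (not OpenAI’s actual code): before fetching a user-supplied URL, it resolves the hostname and rejects anything that lands on a private, loopback, or link-local address, which are the addresses an SSRF attacker typically targets.

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_url_safe(url: str) -> bool:
    """Reject URLs that resolve to private, loopback, link-local, or
    reserved addresses -- the usual targets of an SSRF attack."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        # Resolve the hostname: an attacker can hide an internal IP
        # behind a DNS name, so checking the literal string is not enough.
        resolved = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False
    for _family, _type, _proto, _canon, sockaddr in resolved:
        ip = ipaddress.ip_address(sockaddr[0])
        if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
            return False
    return True
```

Even this check is not airtight: a DNS rebinding attack can change what the name resolves to between validation and fetch, so production systems typically also pin the resolved IP for the actual request.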
In this specific case involving ChatGPT, Krut successfully used the vulnerability to query a local endpoint tied to the Azure Instance Metadata Service (IMDS). IMDS is a component of the Azure cloud platform used for application configuration and management. Crucially, the IMDS identity is what authenticates the service to other cloud resources. Christopher Jess, a senior R&D manager at Black Duck, called the flaw a “textbook example” of how small validation gaps can escalate into “cloud-level exposure.”
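To make the target concrete: Azure’s IMDS listens only on the link-local address 169.254.169.254 and hands out OAuth tokens for the VM’s managed identity, provided the request carries a `Metadata: true` header. That header is meant to block simple cross-site requests, but an SSRF primitive that lets the attacker control headers defeats it. The sketch below builds (without sending) the documented token request; the exact `resource` value is one plausible choice.

```python
from urllib.request import Request

# Azure IMDS is reachable only from inside the VM at this link-local
# address; the endpoint and api-version are documented by Azure.
IMDS_TOKEN_URL = (
    "http://169.254.169.254/metadata/identity/oauth2/token"
    "?api-version=2018-02-01"
    "&resource=https%3A%2F%2Fmanagement.azure.com%2F"
)

# IMDS rejects requests lacking the "Metadata: true" header -- a guard
# that an SSRF vector with header control can simply satisfy.
req = Request(IMDS_TOKEN_URL, headers={"Metadata": "true"})

# urlopen(req), executed from inside the VM, would return a JSON body
# whose "access_token" field authenticates the machine's managed identity.
```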
By obtaining an access token for ChatGPT’s Azure managed identity through IMDS, the researcher demonstrated that an attacker could potentially have reached the underlying Azure cloud infrastructure OpenAI runs on. This underscores the severity of an often-overlooked attack vector: such a token can allow an attacker to pivot into internal services and privileged cloud identities.
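A stolen IMDS token is usable with standard OAuth 2.0 bearer authentication against Azure’s management-plane APIs. The fragment below is a hedged illustration of that pivot, with a placeholder token and an illustrative subscriptions-list endpoint; the request is only constructed, never sent.

```python
from urllib.request import Request

# Placeholder standing in for a real token returned by IMDS.
access_token = "eyJ...example"

# Azure Resource Manager accepts the managed identity's token as a
# standard OAuth 2.0 bearer credential; the endpoint and api-version
# here are illustrative.
req = Request(
    "https://management.azure.com/subscriptions?api-version=2022-12-01",
    headers={"Authorization": f"Bearer {access_token}"},
)
```

This is why Jess’s “cloud-level exposure” framing fits: the blast radius is bounded only by the roles granted to the compromised identity.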
The researcher promptly reported the vulnerability to OpenAI through its Bugcrowd bug bounty program. OpenAI rated the flaw ‘high severity’ and patched it quickly. While the program offers up to $100,000 for critical vulnerabilities, the average recent payout has been under $800 and the highest publicly listed reward since May was $5,000, so the actual compensation for this finding remains uncertain.