A security vulnerability was recently discovered in GitHub Copilot Chat, an AI assistant designed to help developers with coding tasks. The flaw, detailed by security firm Legit Security, allowed a researcher to gain full control over Copilot’s responses and even leak sensitive information from users’ private repositories. This was achieved by using a technique called remote prompt injection alongside a creative bypass of GitHub’s Content Security Policy (CSP).
The vulnerability stemmed from a feature that allows users to hide content from the rendered Markdown using HTML comments. While the comments themselves were hidden, the text was still processed by the AI. A researcher named Omer Mayraz from Legit Security discovered he could inject commands and instructions into these hidden comments. When other users interacted with the AI, the hidden instructions were processed as part of their chat context. This allowed the attacker to manipulate Copilot’s suggestions, potentially tricking other users into installing malicious packages.
To escalate the attack, Mayraz realized he could craft prompts that instructed Copilot to access a user’s private repository, encode its content, and append it to a URL. The goal was to have the user click the URL, which would then exfiltrate the stolen data. However, GitHub’s CSP blocks external requests from untrusted domains, preventing this type of data leakage. Specifically, any HTML image tags injected into the chat would be blocked unless the URL was first validated and proxied through GitHub’s Camo service.
Mayraz found a way to bypass this protection. GitHub’s Camo proxy is designed to fetch external images from a secure, controlled source. To get around this, Mayraz pre-generated a dictionary of valid Camo URLs for every letter and symbol in the alphabet. By embedding this dictionary into his injected prompt, he could instruct Copilot to construct valid Camo URLs on the fly to exfiltrate the stolen data. This allowed him to retrieve the encoded repository content one character at a time.
To prove the exploit’s effectiveness, Mayraz demonstrated how the attack could be used to leak sensitive data like AWS keys and zero-day vulnerabilities. GitHub was notified of the issue and has since patched the vulnerability by disallowing the use of the Camo service to leak sensitive user information. This discovery highlights the ongoing security challenges of AI-powered tools and the importance of addressing vulnerabilities that could lead to data theft and manipulation.
Reference:




