Microsoft has recently patched a critical vulnerability in Microsoft 365 Copilot, addressing a flaw that could have enabled attackers to steal sensitive user data. The vulnerability, discovered by security researcher Johann Rehberger, involved a novel technique known as ASCII smuggling. The method abuses characters from the Unicode Tags block, which mirror printable ASCII one-for-one but render as invisible in the user interface. By exploiting this flaw, attackers could embed hidden data within clickable hyperlinks, staging information for exfiltration without the user’s awareness.
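To make the mechanism concrete, here is a minimal sketch of how Tags-block smuggling works. The function names and sample payload are illustrative, not taken from the actual exploit:

```python
# Sketch of the Unicode Tags mechanism behind ASCII smuggling.
# Characters in the Tags block (U+E0020-U+E007F) mirror printable ASCII
# but render as invisible in most user interfaces.

TAG_BASE = 0xE0000  # offset between an ASCII character and its Tags twin

def hide_ascii(payload: str) -> str:
    """Map each printable ASCII character to its invisible Tags counterpart."""
    return "".join(chr(TAG_BASE + ord(c)) for c in payload if 0x20 <= ord(c) < 0x7F)

def reveal_hidden(text: str) -> str:
    """Recover any smuggled ASCII from a string -- useful on the defender's side."""
    return "".join(
        chr(ord(ch) - TAG_BASE)
        for ch in text
        if TAG_BASE + 0x20 <= ord(ch) < TAG_BASE + 0x7F
    )

visible = "Click here for the report"
smuggled = visible + hide_ascii("mfa-code=123456")
print(len(visible), len(smuggled))  # the lengths differ; the rendering does not
print(reveal_hidden(smuggled))      # -> mfa-code=123456
```

Because the hidden suffix occupies codepoints most renderers draw as nothing at all, the two strings look identical on screen even though one is carrying a payload.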
The attack chain combined several steps into a reliable exploit. First, a prompt injection was triggered through malicious content concealed in a shared document. The injected prompt instructed Copilot to search for additional emails and documents, setting the stage for the next phase. ASCII smuggling was then used to stage the harvested data inside a seemingly harmless hyperlink presented to the user. Once clicked, the link transmitted sensitive data, including multi-factor authentication (MFA) codes, to a third-party server controlled by the attacker.
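A defender can look for exactly this pattern in rendered output. The checker below is a hypothetical illustration, not Microsoft's fix; it assumes markdown-style [text](url) links like those an assistant might render, flags any link whose text or URL carries Tags-block characters, and decodes what they hide:

```python
import re

TAG_RANGE = range(0xE0000, 0xE0080)            # Unicode Tags block
MD_LINK = re.compile(r"\[([^\]]*)\]\(([^)]*)\)")  # [text](url) links

def flag_suspicious_links(markdown: str) -> list[dict]:
    """Flag rendered links whose text or URL carries invisible Tags characters.

    Hypothetical checker, shown to illustrate the exfiltration pattern:
    data hidden inside a link the user is lured to click.
    """
    findings = []
    for text, url in MD_LINK.findall(markdown):
        hidden = [ch for ch in text + url if ord(ch) in TAG_RANGE]
        if hidden:
            findings.append({
                "visible_text": "".join(ch for ch in text if ord(ch) not in TAG_RANGE),
                "url": url,
                "smuggled": "".join(chr(ord(ch) - 0xE0000) for ch in hidden),
            })
    return findings

# Example: a link whose visible text secretly carries the fragment "mfa".
output = "Summary: [Open report\U000E006D\U000E0066\U000E0061](https://attacker.example/collect)"
for finding in flag_suspicious_links(output):
    print(finding)  # reveals the smuggled fragment and the third-party URL
```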
The implications of such an exploit are significant, especially given the sensitive nature of the data that could be exposed. In addition to MFA codes, other personal information contained in emails could also be compromised. The vulnerability was responsibly disclosed to Microsoft in January 2024, and the company subsequently released a patch to address the issue. This incident underscores the importance of monitoring AI-driven tools like 365 Copilot for emerging threats, as their integration into daily workflows makes them attractive targets for cybercriminals.
Beyond this specific flaw, the broader security of AI tools remains a growing concern. Proof-of-concept attacks have shown that Microsoft Copilot can be manipulated to exfiltrate data, bypass security protections, and even generate phishing pages. Zenity researchers have highlighted risks such as retrieval-augmented generation (RAG) poisoning and indirect prompt injection, which could lead to remote code execution attacks. Enterprises are advised to implement robust security controls, such as Data Loss Prevention (DLP), to mitigate risks associated with AI tools, ensuring they do not become a gateway for data breaches.
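As one narrow, illustrative control alongside DLP, output destined for the UI can be scrubbed of Tags-block characters before rendering. This closes this particular smuggling channel but is no substitute for broader defenses such as link rewriting or prompt-injection filtering; the function below is a sketch, not a vendor API:

```python
def strip_tag_characters(text: str) -> str:
    """Drop Unicode Tags-block characters (U+E0000-U+E007F) before rendering.

    Defeats this specific smuggling channel; pair it with DLP and
    prompt-injection defenses rather than relying on it alone.
    """
    return "".join(ch for ch in text if not 0xE0000 <= ord(ch) <= 0xE007F)

# The invisible payload disappears; the visible text is untouched.
dirty = "Open report" + "".join(chr(0xE0000 + ord(c)) for c in "secret")
assert strip_tag_characters(dirty) == "Open report"
```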