Cybersecurity researchers have uncovered a dangerous attack vector called the “Rules File Backdoor” targeting AI-powered code editors. This attack lets hackers inject malicious code into configuration files used by tools like GitHub Copilot and Cursor. Attackers exploit hidden unicode characters and evasion techniques to bypass typical code reviews, allowing harmful code to silently spread through projects. The attack, which propagates across the software supply chain, poses a serious risk, as poisoned rule files can affect downstream dependencies and end users.
The attack exploits “rules files,” which guide AI behavior when generating or modifying code. These configuration files define best coding practices, project architecture, and standards. Although trusted by developers, rule files are rarely scrutinized for malicious content, making them vulnerable to attacks. Attackers can embed harmful instructions in rule files using invisible characters like zero-width joiners.
These instructions trick the AI into generating compromised code, which can go undetected by the development team.
The Rules File Backdoor is especially dangerous because it spreads across project repositories. Malicious rule files persist even when projects are forked or copied, infecting downstream projects and dependencies. This makes the attack hard to detect and trace. The attack also turns AI-powered tools into unwitting accomplices in compromising software.
This undermines trust in these tools, which developers rely on for efficiency, and exposes millions of users to potential vulnerabilities.
AI-powered code assistants have become critical tools in software development, with 97% of enterprise developers using them. These tools speed up coding tasks and are integrated into most development workflows. However, the widespread adoption of these tools also creates a larger attack surface. AI tools, especially rule files, are shared across public and private repositories, often without proper security vetting. The Rules File Backdoor emphasizes the need for stronger security measures when using AI tools in software development.