Cybersecurity researchers have uncovered a new threat lurking on the Python Package Index (PyPI): a malicious package designed to steal sensitive developer information. The package, named chimera-sandbox-extensions, was downloaded 143 times and likely targeted users of Grab’s “Chimera Sandbox” service. Disguised as a helpful module, it is built to pilfer credentials, configuration data, and environment variables, including those related to Jamf, CI/CD pipelines, and AWS.
Upon installation, the malware uses a domain generation algorithm (DGA) to derive and connect to an external server under the attacker’s control.
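The specific algorithm used by chimera-sandbox-extensions has not been published in full, but the general technique is well understood: a DGA deterministically derives candidate domains from a seed and the current date, so the attacker can register one of them ahead of time while defenders cannot blocklist a single hardcoded address. A minimal sketch, with the seed and TLD chosen purely for illustration:

```python
import hashlib
from datetime import date

def generate_domains(seed: str, day: date, count: int = 5) -> list[str]:
    """Deterministically derive candidate contact domains from a seed
    and a date -- the core idea behind a domain generation algorithm."""
    domains = []
    for i in range(count):
        # Mix seed, date, and an index so each day yields a fresh set.
        material = f"{seed}:{day.isoformat()}:{i}".encode()
        digest = hashlib.sha256(material).hexdigest()
        # Use the first 12 hex characters as the domain label.
        domains.append(digest[:12] + ".com")
    return domains

print(generate_domains("example-seed", date(2025, 6, 1)))
```

Because the output is deterministic, the malware and its operator independently compute the same candidate list each day; the client simply tries each domain until one resolves.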
From the DGA-generated server, the malware downloads an authentication token, which it then uses to retrieve a Python-based information stealer. This stealer can siphon a wide array of data from compromised systems, including Jamf receipts (indicating macOS targeting), Pod sandbox environment tokens, CI/CD variables, Zscaler configurations, and sensitive Amazon Web Services account details.
The breadth of data collected strongly suggests a focus on corporate and cloud infrastructure.
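To gauge what such a stealer could harvest from a build machine, teams can audit their own environments. The sketch below flags environment variable names matching the data categories reported (AWS, CI/CD, tokens, Zscaler); the pattern list is an assumption for illustration, not the malware’s actual target list:

```python
import os
import re

# Illustrative patterns only -- the stealer's real target list is broader
# and not fully public.
SENSITIVE_PATTERNS = [
    r"^AWS_",     # AWS credentials and configuration
    r"^CI_",      # generic CI/CD pipeline variables
    r"TOKEN",     # auth tokens of any kind
    r"SECRET",    # generic secrets
    r"ZSCALER",   # Zscaler configuration
]

def find_sensitive_vars(env: dict[str, str]) -> list[str]:
    """Return the names of environment variables that a credential
    stealer would plausibly target, for auditing purposes."""
    regexes = [re.compile(p, re.IGNORECASE) for p in SENSITIVE_PATTERNS]
    return sorted(
        name for name in env
        if any(rx.search(name) for rx in regexes)
    )

if __name__ == "__main__":
    print(find_sensitive_vars(dict(os.environ)))
```

Running this in a CI job is a quick way to see exactly which values a compromised build dependency could read and exfiltrate.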
The stolen information is then sent back to the attacker’s domain, where the server seemingly evaluates whether the compromised machine is a valuable target for further exploitation. According to JFrog security researchers, the multi-stage and highly targeted nature of this malware distinguishes it from more common open-source threats, highlighting a concerning advancement in malicious package sophistication. This incident, alongside the recent discovery of malware-laced npm packages like eslint-config-airbnb-compat and solders, underscores the escalating risks in the software supply chain.
Beyond credential theft, the open-source supply chain is also facing a rise in cryptocurrency-related malware, including credential stealers, cryptocurrency drainers, cryptojackers, and clippers. Furthermore, the emergence of AI-assisted coding has introduced a novel threat called slopsquatting. This phenomenon occurs when large language models (LLMs) “hallucinate” plausible but non-existent package names. Malicious actors can then register these names on public registries, creating an opportunity for supply chain attacks if developers or AI agents attempt to install them.
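One practical mitigation against slopsquatting is to vet LLM-suggested package names against a reviewed allowlist before anything is installed. The helper below is a minimal sketch; the allowlist contents and package names are illustrative assumptions:

```python
# Defensive check against "slopsquatting": split LLM-suggested package
# names into approved and unvetted before running any installer.

KNOWN_GOOD = {"requests", "numpy", "flask"}  # illustrative allowlist

def vet_packages(
    suggested: list[str], allowlist: set[str]
) -> tuple[list[str], list[str]]:
    """Partition suggested package names into (approved, unvetted),
    preserving order. Comparison is case-insensitive."""
    approved = [p for p in suggested if p.lower() in allowlist]
    unvetted = [p for p in suggested if p.lower() not in allowlist]
    return approved, unvetted

ok, suspect = vet_packages(["requests", "flask-easy-auth"], KNOWN_GOOD)
print("approved:", ok)
print("needs review:", suspect)
```

In practice the allowlist would come from a pinned, human-reviewed requirements file or an internal package mirror, so that a hallucinated name fails the check instead of silently resolving to whatever an attacker registered on the public registry.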